Applies to:
Microsoft® Visual Studio® .NET Enterprise Architect Edition
Microsoft .NET Framework
Microsoft Visio®
Summary: Learn about service-oriented architecture (SOA), and how it can be used to create maintainable enterprise-scale systems. (42 printed pages)
To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.
Contents
Traditional Application Design and Architecture
Component-Based Application Architecture
Solution: Service-Oriented Architecture (SOA)
Service-Oriented Architecture (SOA) and the Enterprise
Benefits of a Service-Oriented Architecture
FoodMovers Project Development
Conclusion
In the third section of this project on Visual Studio .NET Enterprise Architect Edition, you'll learn about developing the service-oriented architecture for our fictional company, FoodMovers Distribution Company.
First, I will discuss the traditional application architecture and the problems with it. Then I will discuss the component-based architecture, which solves some of the problems with traditional application architecture but still has its own problems.
Then I will talk about a service-oriented architecture, and how it can be used to create maintainable enterprise-scale systems. We will use Visio 2002 to architect and design services and service interfaces. In the development of the services from the design, we will use Microsoft Visual Studio .NET 2003.
The FoodMovers system uses the service-oriented architecture to design and develop the service interface objects that will expose business processes and data to user interface components.
In this section, I will focus on the design of the data structures and the development of the first of the service interfaces, the Order Manager.
Traditional Application Design and Architecture
When computers first appeared, everything ran on a monolithic machine. The mainframe computer architecture had specific areas of implementation. First, there was a multiple-level data environment where data was stored on disk drives, tape, and sometimes punched cards.
On the other end was the user interface, which was typically a terminal that had a 25x80 character screen for display and a keyboard for input.
In between the data environment and the user interface was where the organization's business processes massaged the data.
This multiple-level architecture worked pretty well for years, as long as access to the resources of the mainframe was carefully controlled and everyone was using the same programming language and programming techniques.
With the advent of the personal computer, things changed. Newer, easier-to-use programming languages allowed less-disciplined programmers to create mission-critical programs without the structure enforced by the best practices developed within the organization.
During this phase, which I call traditional application architecture, there are no reusable components. Programs are written without much thought to logical layers. Instead, each application has the data, user interface code, and code that encapsulates the business logic information, all contained in a single executable bundle.
Another feature of the traditional application architecture is the use of proprietary languages and formats for accessing, processing, and presenting data. An example is shown in Figure 1, which has a data source where the data resides in different formats: in a SQL Server database, an XML database, and a flat file.
Figure 1. Traditional application architecture combines everything into a single program.
The data access code that reads and writes the database is written using proprietary languages and data access techniques. This might be written using standards such as SQL, or could be home-grown code or commercial solutions such as dBase.
This proprietary data access code is used to retrieve data in the format that it understands using the language that it speaks. The business functions are coded in the same layer as the data access. Somewhere hidden in the application is the business logic.
At some point, the data is presented to the user through a user interface, such as a Windows form, or a console application run from the command line or by a scheduled cron job.
This application architecture is still widely used, because it is easy for a single programmer to write, as long as they don't need much input from other sources. However, there are obvious problems with this approach.
- The application's functions cannot be re-used. For example, the business functions are written for this particular application and for this platform only. They cannot be re-used by other applications.
- It is difficult to debug the program as it grows, and maintain it as it is deployed. Code with different purposes is mixed together. A change to one part of the code could adversely affect other code.
- Security is another problem because the user interface cannot be isolated from the rest of the program. This makes it difficult to apply security mechanisms at the operating system level when the system is physically deployed. For example, the user interface cannot be separated from the business logic by a firewall.
- Traditional application architecture makes it difficult to integrate applications that reside on different platforms. Integrating two applications requires specialized integration code that is written only for those two applications, making integration difficult and expensive.
- Scalability is all but impossible because it is difficult to spread any part of the application across several physical machines or add machines as load gets heavy.
Component-Based Application Architecture
As systems grew larger and involved more programmers and larger-scale deployments, the problems with the traditional application architecture were addressed with the advent of component-based application architecture.
Component-based architecture first addressed the problem of integrated code by defining layers of functionality. An application requires access to data sources, business logic, and presentation capabilities. If we decouple these functions from each other in the traditional application design, then we will have a place to deploy re-usable components.
The component-based application architecture and its logical layers are shown in Figure 2.
Figure 2. Component-based application architecture
The data layer contains all data sources, whether they are SQL databases, XML documents, flat files, or any other type. Getting data out of the data sources requires a data access layer that has functions to connect, query, and update the database.
The data access layer communicates with the business logic layer and provides a uniform view of the data.
The presentation layer components present the processed data to the user, get instructions back, and send everything back down the chain.
The most common component model in the Windows environment is COM. If they are designed correctly, COM components in each of these layers can be re-used by the other components and applications. In the Java world, CORBA is the king, but components can also exist as Java servlets or other variations.
This makes it possible to distribute development tasks across more programmers, and makes the system more robust, scalable, and maintainable. Using the component-based architecture has been the best way to create systems for the past ten years.
In the modern distributed application environment, however, there are problems with the component-based architecture approach. Writing components that can be shared requires thought, as different programming languages are not as compatible as one might like. A component written in C++ for use in a C++ environment sometimes has difficulty being used in an environment where Visual Basic is the main language. This is because of the way components are loaded and called.
But even if the cross-language problems are fixed, there is a bigger problem: It is very difficult for components to be shared across heterogeneous platforms. That is, calling a COM object from a Java program, or a CORBA object from a Visual Basic application. Cross-platform interoperability is not easy with the component models we have been using for the past decade. And forget about calling a foreign object from beyond the firewall!
When a new component-based application needs to get the data from another component-based application in a different platform, how can this be done? Consider a typical component-based application where the business rules and data are hidden behind a user interface. In order to get the information from one machine to an incompatible system, a human user needs to read the data on one screen and enter it into the other.
If there is no user interface, some integration code must be written on both platforms in order to share information. This is an example of tightly-coupled, application-specific interoperability. The very nature of such interfaces is that they are difficult to write and maintain, and erratic in their behavior; any change to either application usually results in nothing working.
This model is illustrated in Figure 3.
Figure 3. Integration is a problem in both traditional and component-based application design.
This is considered a tightly coupled solution, in which two applications that are being integrated are aware of each others' implementation details. Integrating systems like this requires custom code that could be somewhat fragile. Or, it requires a human, who reads the output from one application and keys it into another application.
What if...?
Having a component-based application design allows the components to be re-used on the same platform, and usually in the same programming language.
- What if we need to share the business functions throughout heterogeneous platforms in the organization?
- What if we need to share information across the firewall?
- What if we want to share our information with our external trading partners?
We need two things:
- First, every component-based application needs to speak the same language.
- Second, instead of thinking in terms of processes, components, and data, we need to think in terms of services.
Solution: Service-Oriented Architecture (SOA)
Let's re-arrange the layers just a little bit. The data layers and the business layers remain separate, even using the same components as before. However, the business layer components are completely decoupled from the presentation-level components and usually placed on another machine.
In order to get at the functionality of this "viewless" system, we will add an interface point for the business logic and workflow components and call it a "Service Interface." The service interface wraps the business layer components, offering a point of access for any process that needs to access the business logic service. This architecture is shown in Figure 4.
Figure 4. Service-centric application decoupled from the user interface
Notice that there is no longer a presentation layer where humans access the application. We will find out where it went in a minute. By decoupling the user interface from the underlying business logic and data, and then wrapping this functionality with a common interface, we get a "Service Description." I call this a "Web service" when the service interface is described and exposed using XML-based standards such as SOAP and WSDL.
When the components are aggregated to define, describe and expose service functionality, and the applications are architected and designed using these service definitions, the whole thing is called a Service-Oriented Architecture (SOA).
It is interesting to note that, by decoupling the presentation layer from the business layer, much more functionality can be added to the service interface. These include routing, referral, transactions, and security. I will discuss these common functions in the service interface and show how to extend and add more layers to it in Section 5, Extensions: Building Blocks for Extra Functionality.
A Service Delivered to Any Platform
Because it uses open standards, business functions that are defined as services can run on different platforms but still can be accessed by each other.
A service definition doesn't have a presentation layer. It is de-coupled from the presentation-level components. This is an advantage because now the service can be delivered to any user interface that resides on any platform. Because the service interface is exposed using the standards, the only requirement of the client application is its understanding of these XML-based standards.
An example of a service delivered to any platform and integrated with any service is shown in Figure 5.
Figure 5. Web Services Integration Architecture
In order to access the service, you need to build some kind of presentation layer. This presentation layer can be designed for consumption by human eyeballs, or can just as easily be designed for consumption by another computer program. In fact, even the platform on which the presentation layer is written need not be the same as the platform that runs the service.
An ASP.NET Web service can be delivered to a client mobile application or a Java console application. On the other hand, a J2EE Web service can be delivered to an ASP.NET Web application or a Windows application. Notice, also, that two Web services can communicate with each other. In the FoodMovers system, the Warehouse Manager requests information from the Inventory Manager.
This is the beauty of using Web services, where the business logic is wrapped with a common interface that most of the software vendors agree to use. Therefore, integration is not a problem between different technologies.
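To make this concrete, here is a minimal sketch of what a service interface contract might look like in C#. The interface and class names are illustrative, not from the FoodMovers design documents; the PlaceOrder name echoes the service operations discussed later. Decorating the same method with a [WebMethod] attribute in an ASP.NET .asmx page would expose it as a Web service that SOAP clients on any platform could call.

```csharp
using System;

// Hypothetical service contract; the caller depends only on this
// interface, not on the implementation behind it.
public interface IOrderService
{
    string PlaceOrder(int storeID, string itemUPC, int quantity);
}

// The implementation hides the business and data layers. Exposing
// PlaceOrder through an .asmx page with [WebMethod] would turn this
// into a Web service reachable from Java, mobile, or Windows clients.
public class OrderService : IOrderService
{
    public string PlaceOrder(int storeID, string itemUPC, int quantity)
    {
        // Business logic and data access would go here; this stub
        // simply acknowledges the order.
        return string.Format("Order accepted: store {0}, item {1}, qty {2}",
            storeID, itemUPC, quantity);
    }
}

public class Demo
{
    public static void Main()
    {
        IOrderService svc = new OrderService();
        Console.WriteLine(svc.PlaceOrder(42, "043000181706", 10));
    }
}
```

The point of the sketch is the shape, not the stub: the client sees only the contract, so the implementation can move to another machine, or another platform, without the client changing.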
Service-Oriented Architecture (SOA) and the Enterprise
After discussing the service-based application design and architecture, let's discuss what this means for enterprise applications.
A typical enterprise application needs to be distributed to many machines for scalability, availability, and performance. The application architecture and design must allow this type of distribution.
An application designed with SOA is based on services that interact with business components. Each service defines a particular business function. These services interact with each other to accomplish business processes. In the FoodMovers project, they will have names such as PlaceOrder or ReceiveShipment.
This service-based design provides the key to flexibility. It allows functions that are designed, implemented, and exposed as services to make use of other services regardless of where they are, on which physical machine they are deployed, and so on. Taken together, these services define a system consisting of organizational functions and, when exposed to each other, business relationships.
For these services to communicate, there must be some kind of virtual conduit that carries messages between them:
- No matter what type of machine they are deployed on
- No matter where they physically reside
- No matter what language they are written in
- No matter where in the world they are
This virtual conduit is called a "message bus," and can exist on any physical network where all machines reside. A message bus is not the wire. Rather, it is a conceptual artifact that rides on a physical carrier. The beauty of the message bus is that it provides a standard protocol for any message to be communicated between interested parties.
For business-to-business communication, the message bus could travel on the Internet. For internal integration, the message bus could utilize the local intranet. However, the message bus could also send messages over a queuing mechanism such as Microsoft MSMQ or IBM MQ Series. I will talk more about the message bus in Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration.
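The message-bus idea can be sketched in a few lines of C#. This is a toy in-memory version, with an invented class name and API, meant only to show the contract: senders and receivers agree on a message format and a named channel, never on each other's implementation. A production bus would ride on HTTP/SOAP, MSMQ, or MQ Series instead of a Hashtable.

```csharp
using System;
using System.Collections;

// Toy in-memory message bus; names and API are illustrative only.
public class MessageBus
{
    private Hashtable channels = new Hashtable();

    // Post a message to a named channel. The sender does not know or
    // care who will read it, on what machine, or in what language.
    public void Send(string channel, string message)
    {
        if (channels[channel] == null)
            channels[channel] = new Queue();
        ((Queue)channels[channel]).Enqueue(message);
    }

    // Receive the next message from a channel, or null if it is empty.
    public string Receive(string channel)
    {
        Queue q = (Queue)channels[channel];
        if (q != null && q.Count > 0)
            return (string)q.Dequeue();
        return null;
    }
}

public class BusDemo
{
    public static void Main()
    {
        MessageBus bus = new MessageBus();
        // An XML payload keeps both ends platform-neutral.
        bus.Send("Orders", "<Order><StoreID>42</StoreID></Order>");
        Console.WriteLine(bus.Receive("Orders"));
    }
}
```

Swapping the Hashtable for MSMQ or an HTTP endpoint changes the transport, not the pattern, which is exactly the decoupling the message bus is meant to provide.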
To summarize and extend, then, a service:
- Can correspond to real-life business activities
- Is the interface for a business function or functions (a service can have many operations)
- Is usually discoverable, but not always
- Has a clearly defined interface, which is exposed through some kind of standard contract
- Interacts with other services and components using loosely-coupled, message-based architecture and synchronous or asynchronous access models
- Uses standards for communication
- Provides interoperability
- Is up and running all the time, unlike components that must be instantiated before use
The service interface layer communicates with the business logic classes, business workflows, and other components at the business layer, which then communicates with the data access classes, which connect to data sources to access data.
Benefits of a Service-Oriented Architecture
SOA is beneficial in the enterprise applications because:
- Complexity is encapsulated. Any system has an inherent complexity, the details of which are not important to the users of the system. Service-oriented architectures recognize this complexity, and provide a way to encapsulate it in order to hide it from the consumer of the service.
- Code is mobile. In distributed applications, components can reside on any machine, anywhere in the world, and still be accessed the same way. The clients or other services that access the service don't care where the service is, or in which language it is written.
- Developer roles are focused. A service-oriented architecture forces applications to have many different layers. Developers who are working on the service layer must know transactions, reliability, and messaging, but client developers need to know only their own programming language in order to develop in the environment with which they are familiar. Services appear to client programmers as components, using terms and concepts that they have been using in their development activities for years.
- Development efforts can be done in parallel. Having many application layers in a project means multiple teams can work on their own components independently and in parallel after the architecture and design is complete. This solves many problems in enterprise-scale application development.
- The service definition supports multiple client types. Services and clients can be written in any language and deployed in any platform, as long as they can speak the standard languages and protocols that are used.
- More security can be included. By adding the additional service interface layer, it is possible to provide more security. Different parts of the final application that need different security measures can be deployed behind firewalls. These firewalls can be secured as tightly as the components require and still be accessed by internal or external components.
- More re-usability of components across heterogeneous platforms is possible. There are no language and platform integration problems when the functions are defined as services. The service components can be re-used by other components or services.
FoodMovers Project Development
Now that I have discussed the advantages of a service-oriented architecture, let's see how the FoodMovers project is developed using that architecture.
In this section, I will discuss one of the services, the Order Manager, and go through development tasks step by step. I will show plenty of code, but I will not design the entire system here. After the design, we will build the first layers of our project, starting with the data layer, then the business layer, which includes the service interfaces, then to the presentation layer, which will provide human and machine user access to the system.
The development of the project starts with the Order Manager UML design. The Project Architect designs the Order Manager using Microsoft Visio. Visio is a powerful tool for designing all parts of the system, from database design to Windows forms. For now, we will be using the UML-generation capabilities of Visio.
Order Manager UML Design
The Order Manager UML design shows the components and the relationship between them. This is shown in Figure 6.
Figure 6. Order Manager UML Design
This design shows all of the operations that will be defined in the service, as well as all of the data sources that will be used. From this UML diagram, Visio can generate empty classes in C#. These empty classes will be distributed as a template to all members of the team that need to develop the classes.
Once the UML diagrams are complete, development of the FoodMovers system follows the design that the Project Architect provided to the project teams.
First, let's talk more about the development of the data layer.
Data
The data layer consists of data sources and data access. We will be using Microsoft SQL Server exclusively, so the data sources are all SQL tables.
There are ten tables in our system:
- Categories
- Inventory
- Items
- Stores
- StoreOrders
- StoreOrderItems
- Suppliers
- SupplierOrders
- SupplierOrderItems
- Users
We will not be managing each of the ten tables individually, however. Instead, we will be managing the tables as eight groups. A store order will have one or more items, so the StoreOrders and StoreOrderItems tables are joined in the database schema by a relationship based on the order identifier, OrderID. The same goes for SupplierOrders and SupplierOrderItems.
XML Schema, DataSet, and DataTable
We will be using the .NET DataSet object to store data for local use. The DataSet object provides a handy interface to the physical database. If you think of the database as a source of water, and a SQL SELECT statement as a faucet, you can think of the DataTable object as a set of cups that hold the water. Once the cups are full, you can turn off the faucet and still have access to the water that came out, while releasing access to the faucet so the next person can fill their cup.
In terms of performance, this provides a way of connecting to the database just long enough to get the data, then shutting down the connection allowing others to access it.
These cups will be created as a class library, using Visual Studio .NET. Creating these cups requires many class definitions with methods to access and update the data. Fortunately, Visual Studio provides a tool for creating these complex class libraries from a simple XML Schema.
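The cup-and-faucet idea can be sketched with a plain DataTable before we get to the generated classes. In the real system the cups are filled from SQL Server by a SqlDataAdapter; in this sketch the rows are added by hand (with made-up store data) so it runs without a database, but the key point is the same: once the rows are local, the connection is no longer needed.

```csharp
using System;
using System.Data;

public class CupsDemo
{
    public static void Main()
    {
        // The "cups": an in-memory table shaped like the Stores data.
        DataTable stores = new DataTable("Store");
        stores.Columns.Add("ID", typeof(int));
        stores.Columns.Add("StoreName", typeof(string));

        // "Open the faucet": in production this is adapter.Fill(...),
        // which connects, copies the rows, and disconnects. The rows
        // here are illustrative sample data.
        stores.Rows.Add(new object[] { 1, "Corner Grocery" });
        stores.Rows.Add(new object[] { 2, "Midtown Market" });

        // The faucet is now off; the data remains available locally
        // while the database connection is free for other callers.
        foreach (DataRow row in stores.Rows)
            Console.WriteLine("{0}: {1}", row["ID"], row["StoreName"]);
    }
}
```

The generated class libraries described next give these cups a fixed, typed shape derived from an XML Schema, instead of the ad hoc columns defined here.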
XML Schema
Visual Studio .NET provides full support for W3C XML Schema Definition (XSD) syntax. In Visual Studio you can create XSD schemas from scratch, or create them directly from the SQL Server tables.
To do that, I added a new XSD schema item to the Common/Data project using Project...Add New Item...XML Schema. Then, I just dragged the table from the Server Explorer to the XSD editing window. This is shown in Figure 7.
Figure 7. Creating XSD schemas from database tables
Notice that the highest-level element is called Document. I want to change that to StoreData, and rename its sub-element to Store to indicate that each record holds a single store. Then, I want to change StoreID to ID and Name to StoreName because the Project Architect's design document told us to.
Making these changes results in an XML Schema that we need. By clicking on the "XML" tab, we can see the XSD schema that was created. This is shown below.
<?xml version="1.0" encoding="utf-8" ?>
<xs:schema id="StoresData"
targetNamespace="http://FoodMovers.com/schemas/StoreData"
elementFormDefault="qualified"
xmlns="http://FoodMovers.com/schemas/StoreData"
xmlns:mstns="http://FoodMovers.com/schemas/StoreData"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema"
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
<xs:element name="StoreData">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="Store">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:int" />
<xs:element name="StoreName"
type="xs:string" minOccurs="0" />
<xs:element name="Street" type="xs:string"
minOccurs="0" />
<xs:element name="City" type="xs:string"
minOccurs="0" />
<xs:element name="State" type="xs:string"
minOccurs="0" />
<xs:element name="Zipcode" type="xs:string"
minOccurs="0" />
<xs:element name="CreditOK"
type="xs:boolean" minOccurs="0" />
<xs:element name="PaymentTerms"
type="xs:string" minOccurs="0" />
<xs:element name="ContactName"
type="xs:string" minOccurs="0" />
<xs:element name="ContactPhone"
type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
<xs:unique name="DocumentKey1" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:Store" />
<xs:field xpath="mstns:ID" />
</xs:unique>
</xs:element>
</xs:schema>
Switching back to the graphical view, I can now tell Visual Studio to generate a dataset for us using Schema...Generate Dataset. In the Solution Explorer window, we see a new file, StoreData.cs, which is the DataSet class generated from the XML Schema.
The class created with the Generate Dataset option also contains a DataTable object. The DataTable object will actually hold the data that is managed by the class. A section of this auto-generated class is shown below.
//------------------------------------------------------------------------
// <autogenerated>
// This code was generated by a tool.
// Runtime Version: 1.1.4322.510
//
// Changes to this file may cause incorrect behavior and will be lost
// if the code is regenerated.
// </autogenerated>
//------------------------------------------------------------------------
namespace FoodMovers.Common.Data {
using System;
using System.Data;
using System.Xml;
using System.Runtime.Serialization;
[Serializable()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Diagnostics.DebuggerStepThrough()]
[System.ComponentModel.ToolboxItem(true)]
public class StoreData : DataSet {
private StoreDataTable tableStore;
public StoreData() {
this.InitClass();
System.ComponentModel.CollectionChangeEventHandler
schemaChangedHandler =
new System.ComponentModel.CollectionChangeEventHandler(
this.SchemaChanged);
this.Tables.CollectionChanged += schemaChangedHandler;
this.Relations.CollectionChanged += schemaChangedHandler;
}
internal void InitVars() {
this.columnID = this.Columns["ID"];
this.columnStoreName = this.Columns["StoreName"];
this.columnStreet = this.Columns["Street"];
this.columnCity = this.Columns["City"];
this.columnState = this.Columns["State"];
this.columnZipcode = this.Columns["Zipcode"];
this.columnCreditOK = this.Columns["CreditOK"];
this.columnPaymentTerms = this.Columns["PaymentTerms"];
this.columnContactName = this.Columns["ContactName"];
this.columnContactPhone = this.Columns["ContactPhone"];
}
public class StoreDataTable : DataTable,
System.Collections.IEnumerable {
private DataColumn columnID;
private DataColumn columnStoreName;
private DataColumn columnStreet;
private DataColumn columnCity;
private DataColumn columnState;
private DataColumn columnZipcode;
private DataColumn columnCreditOK;
private DataColumn columnPaymentTerms;
private DataColumn columnContactName;
private DataColumn columnContactPhone;
internal StoreDataTable() :
base("Store") {
this.InitClass();
}
...
DataTableMapping
Once I create a class library for the DataSet object, I need to create some kind of interface to connect to the database and turn on the faucet. I will be using the SqlDataAdapter object to connect to the database. Then I need to show how the fields in the SQL Server tables are related to the DataTable objects in the DataSet class. This is done with a DataTableMapping object.
With the DataTableMapping object, I will create a map between each field in the database table and a named column in the DataTable. This mapping is done in the DataAccess/Stores.cs class. The Stores class with the DataTableMapping object is shown below.
static SqlDataAdapter dsCommand;
static SqlConnection myConn = new SqlConnection();
public Stores()
{
dsCommand = new SqlDataAdapter();
dsCommand.SelectCommand = new SqlCommand();
dsCommand.SelectCommand.Connection = new
SqlConnection(FoodMovers.Common.Configuration.strConn);
DataTableMapping StoreMap =
dsCommand.TableMappings.Add("Stores", "Store");
StoreMap.ColumnMappings.Add("StoreID", "ID");
StoreMap.ColumnMappings.Add("Name", "StoreName");
StoreMap.ColumnMappings.Add("Street", "Street");
StoreMap.ColumnMappings.Add("City", "City");
StoreMap.ColumnMappings.Add("State", "State");
StoreMap.ColumnMappings.Add("Zipcode", "Zipcode");
StoreMap.ColumnMappings.Add("CreditOK", "CreditOK");
StoreMap.ColumnMappings.Add("PaymentTerms", "PaymentTerms");
StoreMap.ColumnMappings.Add("ContactName", "ContactName");
StoreMap.ColumnMappings.Add("ContactPhone", "ContactPhone");
}
First, the SqlDataAdapter object is instantiated and the database is connected with the appropriate connection string.
Then, a DataTableMapping object is instantiated, which connects the Stores table in the SQL Server database with the Store DataTable in the DataSet, which is the class that was created when I generated a dataset from our XSD schema above.
From there, I need to map each field in the database with its counterpart in the DataSet. Remember when I changed the StoreID field to ID and Name to StoreName? The first two ColumnMappings make these maps. The rest of the columns map name-for-name with the database table.
Once this class is created, store data can be accessed from the database. Let's take a look at a typical invocation of the database. A method to select a store from the database is listed below.
1 public StoreData GetStore(int StoreID)
2 {
3 StoreData data = new StoreData();
4 SqlCommand command = dsCommand.SelectCommand;
5 command.CommandText = "GetStore";
6 command.CommandType = CommandType.StoredProcedure;
7 command.Parameters.Clear();
8 SqlParameter param = new SqlParameter("@StoreID",
9 SqlDbType.Int);
10 param.Value = StoreID;
11 command.Parameters.Add(param);
12 dsCommand.Fill(data,"Stores");
13 return data;
14 }
This method calls a stored procedure. Let's go through it line-by-line. In line 3, the StoreData object is instantiated. This creates the structure shown above, with all of the column mappings in place.
On line 4, a local SqlCommand variable is set to the SelectCommand property of the SqlDataAdapter object.
The SqlDataAdapter object manages the interaction between the DataSet object and the SQL Server database. The architecture is shown in Figure 8.
Figure 8. DataAdapter architecture
In order to read the database, the SelectCommand property points to something that results in a SQL SELECT statement. In our case, we are using a stored procedure called GetStore, which is specified on line 5. This stored procedure is shown below.
CREATE PROCEDURE dbo.GetStore
(
@StoreID int
)
AS
SELECT * FROM Stores WHERE StoreID=@StoreID
Line 6 indicates that we are calling a stored procedure instead of a SQL statement directly.
Lines 7-11 set the StoreID parameter that the stored procedure needs.
Finally, the Fill command on line 12 executes the stored procedure and loads the StoreData DataSet class, which maps the appropriate table fields into the DataTable columns. This "opens the faucet," as I mentioned above. The method returns the filled "cups," the DataSet object.
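A caller might consume the returned DataSet along these lines. To keep the sketch runnable without SQL Server, the DataSet here is built by hand with the same Store table shape the schema defines, and the row values are invented; the commented-out call shows roughly where the real GetStore invocation would go.

```csharp
using System;
using System.Data;

public class GetStoreDemo
{
    public static void Main()
    {
        // Stand-in for the DataSet that GetStore would return.
        // In the real system: StoreData data = new Stores().GetStore(42);
        DataSet data = new DataSet("StoreData");
        DataTable store = data.Tables.Add("Store");
        store.Columns.Add("ID", typeof(int));
        store.Columns.Add("StoreName", typeof(string));
        store.Columns.Add("City", typeof(string));
        store.Rows.Add(new object[] { 42, "Corner Grocery", "Seattle" });

        // Consume the filled "cups": no database connection is held
        // while we read the rows.
        DataRow row = data.Tables["Store"].Rows[0];
        Console.WriteLine("Store {0} is {1} in {2}",
            row["ID"], row["StoreName"], row["City"]);
    }
}
```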
That's the way it works for selecting records and filling a data table. What about the other way? What if we want to update information in the database?
Modifying data in a database is more complicated than simply retrieving data with a SELECT statement. There are three possible things that the object must be prepared to do in order to modify data.
- If a new record is to be added, it must issue an INSERT command.
- If an existing record is to be removed, it must issue a DELETE command.
- If an existing record is to be modified, it must issue an UPDATE command.
The DataAdapter object has three properties, which correspond to the three things it must be prepared to do. Like the SelectCommand
property, the InsertCommand
, DeleteCommand
, and UpdateCommand
properties must be set to something that results in a SQL INSERT, DELETE, or UPDATE command. Each record in the DataSet is then compared with the records in the SQL table, and the appropriate command is issued. All of this happens when the DataAdapter Update
method is called.
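The bookkeeping that drives this can be seen without touching a database: each DataRow carries a RowState, and Update inspects it to decide which command to issue. Here is a minimal, self-contained sketch; the table and data are illustrative, not from the FoodMovers schema.

```csharp
using System;
using System.Data;

class RowStateDemo
{
    static void Main()
    {
        // A small in-memory table; no database connection is needed
        // to observe the change tracking that Update relies on.
        DataTable stores = new DataTable("Store");
        stores.Columns.Add("StoreID", typeof(int));
        stores.Columns.Add("StoreName", typeof(string));

        DataRow row = stores.Rows.Add(1, "First Store");
        stores.AcceptChanges();            // baseline: row is now Unchanged

        row["StoreName"] = "Renamed Store";
        Console.WriteLine(row.RowState);   // Modified -> UPDATE would be issued

        DataRow added = stores.Rows.Add(2, "New Store");
        Console.WriteLine(added.RowState); // Added -> INSERT would be issued

        row.Delete();
        Console.WriteLine(row.RowState);   // Deleted -> DELETE would be issued
    }
}
```

When the SqlDataAdapter Update method walks these rows, rows still in the Unchanged state are skipped entirely; only the Added, Modified, and Deleted rows generate commands.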
Relationships
But let's move back to the XML Schema stage and build something that gets updated. As I mentioned earlier, two tables, StoreOrders
and StoreOrderItems,
have a one-to-many relationship, since an order can have many line items. This relationship is enforced by SQL Server and looks like the illustration in Figure 9.
Figure 9. Store order database tables and relationships
I want to create our data objects for store orders, but, because of their relationship, I need to do a couple of extra things. Since the database manager enforces referential integrity, I want to make sure our DataTable objects do the same. For this reason, there is a DataRelation object, added through the DataSet's Relations collection, that I can use to assert the relationship and ensure that the data has integrity before it is written out to the database.
To create the DataSet, I use the same technique as I did for the Stores
table, except that I drag both the StoreOrders
and StoreOrderItems
tables into the XML Schema editor. This is shown in Figure 10.
Figure 10. Adding two related tables to the schema editor
Notice that the relationship that is maintained by SQL Server is missing. So now, I must create a relationship in the XSD by dragging from the OrderID
element in the StoreOrder
table into the OrderID
element of the StoreOrderItems
table. This raises a dialog box, shown in Figure 11.
Figure 11. Edit Relation dialog box
Taking the defaults will create a relationship as shown in Figure 12.
Figure 12. Indicating a relationship between two table elements
Notice that the OrderItem element has two child elements that were not in the StoreOrderItems database table: Description and ShelfAddress. These elements were added manually, because there are services that need that information, and it is easier to put them in the table now, rather than making large joins later.
After the relationship is given a name and its elements and fields are set, it appears in the XSD file, as shown below.
<?xml version="1.0" encoding="utf-8" ?>
<xs:schema id="StoreOrderData"
targetNamespace="http://FoodMovers.com/schemas/StoreOrderData"
elementFormDefault="qualified"
xmlns="http://FoodMovers.com/schemas/StoreOrderData"
xmlns:mstns="http://FoodMovers.com/schemas/StoreOrderData"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:msprop="urn:schemas-microsoft-com:xml-msprop">
<xs:element name="StoreOrders">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="StoreOrder">
<xs:complexType>
<xs:sequence>
<xs:element name="OrderID"
msdata:ReadOnly="true"
msdata:AutoIncrement="true"
type="xs:int" />
<xs:element name="OrderDate"
type="xs:dateTime" />
<xs:element name="NeedBy"
type="xs:dateTime" />
<xs:element name="StoreID" type="xs:int" />
<xs:element name="UserID"
type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
<xs:unique name="DocumentKey1">
<xs:selector xpath=".//mstns:StoreOrder" />
<xs:field xpath="mstns:OrderID" />
</xs:unique>
<xs:key name="RelationshipKey" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:StoreOrder" />
<xs:field xpath="mstns:OrderID" />
</xs:key>
</xs:element>
<xs:element name="OrderItems">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="OrderItem">
<xs:complexType>
<xs:sequence>
<xs:element name="OrderID" type="xs:int" />
<xs:element name="UPC" type="xs:string" />
<xs:element name="Quantity"
type="xs:int" />
<xs:element name="Shipped"
type="xs:dateTime" minOccurs="0"
maxOccurs="1" nillable="1"
msprop:nullValue="_throw" />
<xs:element name="Description"
type="xs:string" nillable="true"
msprop:nullValue="_empty" />
<xs:element name="ShelfAddress"
type="xs:string" nillable="true"
msprop:nullValue="_null" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
<xs:keyref name="StoreOrdersStoreOrderItem"
refer="RelationshipKey"
msdata:AcceptRejectRule="Cascade"
msdata:DeleteRule="Cascade"
msdata:UpdateRule="Cascade">
<xs:selector xpath=".//mstns:OrderItem" />
<xs:field xpath="mstns:OrderID" />
</xs:keyref>
</xs:element>
</xs:schema>
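Incidentally, the way an annotated schema like this becomes DataSet structure can be seen in isolation by feeding a schema to DataSet.ReadXmlSchema. The toy schema below is a much-simplified, hypothetical stand-in for the one above (only the msdata:IsDataSet annotation and two columns are kept): the element marked IsDataSet becomes the DataSet, its complex child becomes a DataTable, and the simple children become typed DataColumns.

```csharp
using System;
using System.Data;
using System.IO;

class SchemaDemo
{
    static void Main()
    {
        // Minimal msdata-annotated schema, simplified for illustration.
        string xsd = @"<?xml version='1.0'?>
<xs:schema id='StoreOrderData'
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    xmlns:msdata='urn:schemas-microsoft-com:xml-msdata'>
  <xs:element name='StoreOrderData' msdata:IsDataSet='true'>
    <xs:complexType>
      <xs:choice maxOccurs='unbounded'>
        <xs:element name='StoreOrder'>
          <xs:complexType>
            <xs:sequence>
              <xs:element name='OrderID' type='xs:int' />
              <xs:element name='StoreID' type='xs:int' />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>";

        DataSet ds = new DataSet();
        ds.ReadXmlSchema(new StringReader(xsd));

        Console.WriteLine(ds.DataSetName);                        // StoreOrderData
        Console.WriteLine(ds.Tables["StoreOrder"].Columns.Count); // 2
        Console.WriteLine(
            ds.Tables["StoreOrder"].Columns["OrderID"].DataType); // System.Int32
    }
}
```

The Generate Dataset command goes one step further than ReadXmlSchema: it emits a compile-time typed class with named properties, rather than building the structure at run time.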
I will do the same for this class as I did for the stores, by selecting Schema...Generate Dataset.
This creates StoreOrderData.cs
, which contains the DataSet and DataTable classes, in our Common/Data project folder. Now for the creation of the data access classes.
Again, I need to use the DataTableMapping object to specify how to map from the SQL tables to the DataTables. The StoreOrders
data access class is shown below.
1 public StoreOrders()
2 {
3 dsCommand = new SqlDataAdapter();
4 dsCommand.SelectCommand = new SqlCommand();
5 dsCommand.SelectCommand.Connection = new
6 SqlConnection(FoodMovers.Common.Configuration.strConn);
7 dsCommand.InsertCommand = new SqlCommand();
8 dsCommand.InsertCommand.Connection = new
9 SqlConnection(FoodMovers.Common.Configuration.strConn);
10 dsCommand.UpdateCommand = new SqlCommand();
11 dsCommand.UpdateCommand.Connection = new
12 SqlConnection(FoodMovers.Common.Configuration.strConn);
13 dsCommand.DeleteCommand = new SqlCommand();
14 dsCommand.DeleteCommand.Connection = new
15 SqlConnection(FoodMovers.Common.Configuration.strConn);
16
17 DataTableMapping StoreOrderMap =
18 dsCommand.TableMappings.Add("StoreOrders", "StoreOrder");
19 StoreOrderMap.ColumnMappings.Add("OrderID", "OrderID");
20 StoreOrderMap.ColumnMappings.Add("OrderDate", "OrderDate");
21 StoreOrderMap.ColumnMappings.Add("NeedBy", "NeedBy");
22 StoreOrderMap.ColumnMappings.Add("StoreID", "StoreID");
23 StoreOrderMap.ColumnMappings.Add("UserID", "UserID");
24
25 DataTableMapping StoreOrderItemMap =
26 dsCommand.TableMappings.Add("OrderItems", "OrderItem");
27 StoreOrderItemMap.ColumnMappings.Add("OrderID", "OrderID");
28 StoreOrderItemMap.ColumnMappings.Add("UPC", "UPC");
29 StoreOrderItemMap.ColumnMappings.Add("Shipped", "Shipped");
30 StoreOrderItemMap.ColumnMappings.Add("Quantity",
31 "Quantity");
32 StoreOrderItemMap.ColumnMappings.Add("Description",
33 "Description");
34 StoreOrderItemMap.ColumnMappings.Add("ShelfAddress",
35 "ShelfAddress");
36 }
Notice that I set the SelectCommand
property on lines 4-6.
I also set the InsertCommand
, UpdateCommand
, and DeleteCommand
properties on lines 7-15. This is so we can use the SqlDataAdapter Update
method to automatically make the appropriate changes to the database.
Also, note that there are two DataTableMapping objects, one for each table. Notice the ColumnMappings on lines 32-35. These are the two fields, Description
and ShelfAddress
, that were added to the XSD schema. I need to create maps for them, and then make sure they are available in the SQL query. We will see that in a minute.
Now, to fill the DataTables, I need to access information from the SQL database tables, but with a twist. This is shown below.
1 public StoreOrderData GetStoreOrder(int OrderID)
2 {
3 StoreOrderData data = new StoreOrderData();
4 SqlCommand command = dsCommand.SelectCommand;
5 SqlParameter param;
6
7 command.CommandText = "GetStoreOrder";
8 command.CommandType = CommandType.StoredProcedure;
9 command.Parameters.Clear();
10     param = new SqlParameter("@OrderID", SqlDbType.Int);
11 param.Value = OrderID;
12 command.Parameters.Add(param);
13 dsCommand.Fill(data,"StoreOrders");
14
15 command.CommandText = "GetStoreOrderItems";
16 command.CommandType = CommandType.StoredProcedure;
17 command.Parameters.Clear();
18     param = new SqlParameter("@OrderID", SqlDbType.Int);
19 param.Value = OrderID;
20 command.Parameters.Add(param);
21 dsCommand.Fill(data,"OrderItems");
22
23 data.Relations.Add("StoreOrder_StoreOrderItem",
24 data.Tables["StoreOrder"].Columns["OrderID"],
25 data.Tables["OrderItem"].Columns["OrderID"]);
26
27 return data;
28 }
In this case, I need to access the data from each table independently.
This might seem counter-intuitive, since SQL JOINs were created for accessing related database tables and returning a record set. The problem with a JOIN in this case is that a SQL SELECT query returns a flat record set; all fields in the joined tables appear at the same level. This is fine for many applications, but here I am creating a hierarchical data set and specifying its relationships through an XML schema. So we perform two individual queries.
Finally, I need to indicate the relationship between the two tables. This is taken care of on lines 23-25: by adding a relation to the StoreOrderData object's Relations collection, I assert that these tables are related and that the data should maintain referential integrity.
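To see what that relation buys us in memory, here is a small self-contained sketch (simplified tables and illustrative data, not the full FoodMovers schema). The relation both enables hierarchical navigation with GetChildRows and, through the foreign-key constraint it creates by default, rejects child rows that reference a nonexistent parent.

```csharp
using System;
using System.Data;

class RelationDemo
{
    static void Main()
    {
        DataSet data = new DataSet("StoreOrderData");

        DataTable orders = data.Tables.Add("StoreOrder");
        orders.Columns.Add("OrderID", typeof(int));

        DataTable items = data.Tables.Add("OrderItem");
        items.Columns.Add("OrderID", typeof(int));
        items.Columns.Add("UPC", typeof(string));

        // Assert the parent/child relationship between the two tables.
        data.Relations.Add("StoreOrder_StoreOrderItem",
            orders.Columns["OrderID"],
            items.Columns["OrderID"]);

        orders.Rows.Add(1001);
        items.Rows.Add(1001, "012345678905");
        items.Rows.Add(1001, "012345678912");

        // Hierarchical navigation: line items hang off their order.
        DataRow order = orders.Rows[0];
        Console.WriteLine(
            order.GetChildRows("StoreOrder_StoreOrderItem").Length); // 2

        // Referential integrity: an orphan line item is rejected.
        try
        {
            items.Rows.Add(9999, "000000000000");
            Console.WriteLine("orphan accepted");
        }
        catch (InvalidConstraintException)
        {
            Console.WriteLine("orphan rejected");
        }
    }
}
```

This is exactly the integrity check we want in place before the DataSet is written back to SQL Server.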
This type of XSD → DataSet/DataTable → data access classes technique is repeated for each table or related set.
Managers
Now that the data classes are created, we can move on to the service interfaces. As I mentioned earlier, services are exposed to the presentation layer. These are called "service interfaces," and they are really not much more than a pass-through between the business logic and the presentation layer.
In the FoodMovers system, these service interfaces have been collected into groups called Managers. We have four different managers:
UpdateManager
OrderManager
WarehouseManager
InventoryManager
Each of these managers exposes business processes as Web services. First, let's see how the business layer classes work, and then we'll see how the managers expose this logic.
The BusinessLogic
classes contain all of the logic required to run the business. This functionality is sorted into three different classes:
CommonLogic.cs
OrderLogic.cs
WarehouseLogic.cs
All Manager programs import the FoodMovers.BusinessLogic namespace so they can use this functionality.
A typical, but simple, method is shown below.
public StoreData GetStore(int StoreID)
{
Stores accStores = new Stores();
StoreData datStores = new StoreData();
// datUser is a class-level UserData object for the logged-on user
if (datUser.User.Count > 0 &&
datUser.User[0].StoreID == StoreID)
{
datStores = accStores.GetStore(StoreID);
return datStores;
}
else
return null;
}
This method returns a StoreData
object for a store with a given StoreID
.
First, it checks to make sure that a user is logged in (User.Count > 0
) and that the current user is allowed to access the store, then calls the GetStore
method in the data access object, Stores
.
There will eventually be hundreds of methods in the BusinessLogic
classes.
Now let's expose those as Web services through our Manager service interfaces as shown below.
[WebMethod(Description="Returns a StoreData class with a single store")]
public StoreData GetStore(int StoreID)
{
return CommonLogic.GetStore(StoreID);
}
Creating a Web service is as simple as creating a normal public method and exposing it with a [WebMethod]
declaration.
It is, of course, possible to put more logic at this layer, but I would rather keep the business logic in the classes I mentioned earlier to avoid complexity at the service interface (Web service) level.
Once a Web service class containing a WebMethod
is written, it can be accessed through IIS. Creating a WebMethod
will expose the otherwise normal method as a Web service operation. Setting OrderManager.asmx
as the start page and running the project results in a Web page that shows the methods available as Web service operations. This is shown in Figure 13.
Figure 13. Methods available as Web services from OrderManager
The Web page that is shown here is just a test to check the functionality of the service. The real interface is exposed using an international standard called WSDL.
WSDL
The Web Services Description Language (WSDL) is a W3C specification that describes the services and operations of a Web service. WSDL takes the form of an XML document, which is shared between the Web service server and the client. The document contains everything the client needs to know about the service in order to access the operations and the data.
This includes the names of the operations, the parameters required and their data types, and the format of the response document sent back from the server. A WSDL document for the GetStore
operation is shown below.
1 <?xml version="1.0"?>
2 <definitions
3 xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
4 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
5 xmlns:i1="http://FoodMovers.com/schemas/StoreData"
6 xmlns:s="http://www.w3.org/2001/XMLSchema"
7 xmlns:s0="http://foodmovers.com/services/OrderManager"
8 xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
9 xmlns:i0="http://FoodMovers.com/schemas/UserData"
10 xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
11 xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
12 xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
13 targetNamespace="http://foodmovers.com/services/OrderManager"
14 xmlns="http://schemas.xmlsoap.org/wsdl/">
15 <import namespace="http://FoodMovers.com/schemas/StoreData"
16 location="http://localhost/FoodMovers/FoodMoversWebService
17 Projects_OrderManager/OrderManager.asmx?schema=StoreData"/>
18 <types>
19 <s:schema>
20 <s:element name="GetStore">
21 <s:complexType>
22 <s:sequence>
23 <s:element name="StoreID" type="s:int"/>
24 </s:sequence>
25 </s:complexType>
26 </s:element>
27 <s:element name="Store">
28 <s:complexType>
29 <s:sequence>
30 <s:element name="ID" type="s:int"/>
31 <s:element name="StoreName" type="s:string"/>
32 <s:element name="Street" type="s:string"/>
33 <s:element name="City" type="s:string"/>
34 <s:element name="State" type="s:string"/>
35 <s:element name="Zipcode" type="s:string"/>
36 <s:element name="CreditOK" type="s:boolean"/>
37 <s:element name="PaymentTerms" type="s:string"/>
38 <s:element name="ContactName" type="s:string"/>
39 <s:element name="ContactPhone" type="s:string"/>
40 </s:sequence>
41 </s:complexType>
42 </s:element>
43 <s:element name="GetStoreResponse">
44 <s:complexType>
45 <s:sequence>
46 <s:element minOccurs="0" name="GetStoreResult">
47 <s:complexType>
48 <s:sequence>
49 <s:element msdata:IsDataSet="true"
50 name="StoreData">
51 <s:complexType>
52 <s:choice maxOccurs="unbounded">
53 <s:element ref="Store"/>
54 </s:choice>
55 </s:complexType>
56 </s:element>
57 </s:sequence>
58 </s:complexType>
59 </s:element>
60 </s:sequence>
61 </s:complexType>
62 </s:element>
63 </s:schema>
64 </types>
65 <message name="GetStoreSoapIn">
66 <part name="parameters" element="s0:GetStore"/>
67 </message>
68 <message name="GetStoreSoapOut">
69 <part name="parameters" element="s0:GetStoreResponse"/>
70 </message>
71 <portType name="OrderManagerServiceSoap">
72 <operation name="GetStore">
73 <documentation>Returns a StoreData class with a single
74 store</documentation>
75 <input message="s0:GetStoreSoapIn"/>
76 <output message="s0:GetStoreSoapOut"/>
77 </operation>
78 </portType>
79 <binding name="OrderManagerServiceSoap"
80 type="s0:OrderManagerServiceSoap">
81 <soap:binding transport="http://schemas.xmlsoap.org/soap/http"
82 style="document"/>
83 <operation name="GetStore">
84 <soap:operation
85 soapAction=
86 "http://foodmovers.com/services/OrderManager/GetStore"
87 style="document"/>
88 <input>
89 <soap:body use="literal"/>
90 </input>
91 <output>
92 <soap:body use="literal"/>
93 </output>
94 </operation>
95 </binding>
96 <service name="OrderManagerService">
97 <port name="OrderManagerServiceSoap"
98 binding="s0:OrderManagerServiceSoap">
99 <soap:address
100 location=
101 "http://localhost/FoodMovers/FoodMoversWebService
102 Projects_OrderManager/OrderManager.asmx"/>
103 </port>
104 </service>
105 </definitions>
This is an important document because it allows any client that
understands the syntax to be able to access the service.
That means that we can create a client in Visual Studio .NET, but that our vendors can use whatever platform they have, as long as it understands the WSDL standard.
WSDL looks complicated, but it is really quite simple. I like to read it from the bottom up. First, on line 96, is the name of the service. Services are attached to ports that expose the operations. So moving up, we can see that the port's type, OrderManagerServiceSoap
, is defined in the portType element on line 71.
In this simple case, there is only a single operation, GetStore
. The inputs and outputs are defined on lines 75 and 76. Moving up the document, we see that GetStoreSoapIn
is defined in the message element on line 65. That points to an element, GetStore
, in the namespace indicated by the prefix s0
. The s0
namespace prefix is defined on line 7 as the OrderManager service namespace. The element that defines the input, GetStore
, is defined in the inline XSD schema on lines 20-26; that schema (lines 19-63) also serializes the StoreData
structure that I defined earlier as a DataSet class.
Moving back down to the output element on line 76, we can see that it is expressed as s0:GetStoreSoapOut
. This is resolved in the message part on line 69 as another element with the s0
namespace prefix. That element, GetStoreResponse
, is defined in lines 43-62. The guts of the response, the Store
element itself, is defined on lines 27-42, and referenced on line 53.
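This bottom-up reading can even be done mechanically. The sketch below walks a heavily trimmed, hypothetical fragment of the WSDL above, going from the service's port to the portType operation with XPath; only the structural elements needed for the walk are kept.

```csharp
using System;
using System.Xml;

class WsdlWalk
{
    static void Main()
    {
        // Trimmed, illustrative fragment of the OrderManager WSDL.
        string wsdl = @"<definitions
    xmlns='http://schemas.xmlsoap.org/wsdl/'
    targetNamespace='http://foodmovers.com/services/OrderManager'>
  <portType name='OrderManagerServiceSoap'>
    <operation name='GetStore' />
  </portType>
  <binding name='OrderManagerServiceSoap'
           type='s0:OrderManagerServiceSoap'
           xmlns:s0='http://foodmovers.com/services/OrderManager' />
  <service name='OrderManagerService'>
    <port name='OrderManagerServiceSoap'
          binding='s0:OrderManagerServiceSoap'
          xmlns:s0='http://foodmovers.com/services/OrderManager' />
  </service>
</definitions>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(wsdl);

        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("w", "http://schemas.xmlsoap.org/wsdl/");

        // Bottom-up: service -> port -> binding reference.
        XmlNode port = doc.SelectSingleNode("//w:service/w:port", ns);
        Console.WriteLine(port.Attributes["binding"].Value);

        // The portType holds the operation names.
        XmlNode op = doc.SelectSingleNode(
            "//w:portType/w:operation/@name", ns);
        Console.WriteLine(op.Value);
    }
}
```

A client toolkit does essentially this walk, then uses the types section to learn the shape of each message.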
The interaction between the client and the server is shown in the following two code samples. First is the SOAP request message:
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<GetStore
xmlns="http://foodmovers.com/services/OrderManager">
<StoreID>112</StoreID>
</GetStore>
</soap:Body>
</soap:Envelope>
The request message above is followed by this response:
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<GetStoreResponse
xmlns="http://foodmovers.com/services/OrderManager">
<GetStoreResult>
<StoreData
xmlns="http://FoodMovers.com/schemas/StoreData">
<Store>
<ID>112</ID>
<StoreName>Kargar's Halal Meat Store</StoreName>
<Street>1546 W. 70th Ave.</Street>
<City>Denver</City>
<State>CO</State>
<Zipcode>80211</Zipcode>
<CreditOK>true</CreditOK>
<PaymentTerms>Net 30</PaymentTerms>
<ContactName>Mr. Karger</ContactName>
<ContactPhone>303-337-3121</ContactPhone>
</Store>
</StoreData>
</GetStoreResult>
</GetStoreResponse>
</soap:Body>
</soap:Envelope>
As you can see, the interaction has everything necessary
for the client to understand the information exposed by the server.
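On the client side, extracting data from such a response is ordinary XML processing. The sketch below parses an abbreviated copy of the response (only two of the Store fields are kept); in practice the generated proxy, discussed next, does this work for us.

```csharp
using System;
using System.Xml;

class SoapParseDemo
{
    static void Main()
    {
        // Abbreviated copy of the SOAP response for GetStore.
        string soap = @"<soap:Envelope
    xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>
  <soap:Body>
    <GetStoreResponse xmlns='http://foodmovers.com/services/OrderManager'>
      <GetStoreResult>
        <StoreData xmlns='http://FoodMovers.com/schemas/StoreData'>
          <Store>
            <ID>112</ID>
            <StoreName>Kargar's Halal Meat Store</StoreName>
          </Store>
        </StoreData>
      </GetStoreResult>
    </GetStoreResponse>
  </soap:Body>
</soap:Envelope>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(soap);

        // Namespace-aware XPath: the Store data lives in its own namespace.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("s", "http://schemas.xmlsoap.org/soap/envelope/");
        ns.AddNamespace("d", "http://FoodMovers.com/schemas/StoreData");

        XmlNode store = doc.SelectSingleNode("//d:Store", ns);
        Console.WriteLine(store.SelectSingleNode("d:ID", ns).InnerText);
        Console.WriteLine(store.SelectSingleNode("d:StoreName", ns).InnerText);
    }
}
```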
So how do we get the information into the client so it will know how to create the SOAP request and response messages?
Proxies
Any Web service can be accessed in Visual Studio by adding a Web reference. This works the same way as adding a normal reference, except that the interface information is retrieved from a WSDL document instead of from a .NET or COM object.
However, adding a Web reference requires the programmer to think about Web services and the method with which the service is accessed. If we really want to make accessing Web services as seamless as accessing a local object, there is another way.
The .NET Framework ships with a utility called WSDL.EXE. This tool can be used to create a proxy class from any WSDL document. By generating this class ahead of time, a programmer just needs to include it and program against it like any other class.
The class that WSDL.EXE creates has all of the logic for connecting to the service, creating SOAP request messages, and accessing SOAP response messages.
For our project, I will create a proxy for each service interface manager in our system. WSDL.EXE is a command-line program. I will be using the command shown below.
wsdl /l:CS /out:BusinessLogic/Proxy/OrderManagerProxy.cs
    http://foodmovers.com/FoodMovers/FoodMoversWebServiceProjects_OrderManager/OrderManager.asmx
    /n:FoodMovers.BusinessLogic.Proxy
I include the command in the build list so that the proxies are always up-to-date.
The C# class for the OrderManager proxy is over 4,000 lines long, but part of it is shown below.
//------------------------------------------------------------------------
// <autogenerated>
// This code was generated by a tool.
// Runtime Version: 1.1.4322.510
//
// Changes to this file may cause incorrect behavior and will be lost
// if the code is regenerated.
// </autogenerated>
//------------------------------------------------------------------------
//
// This source code was auto-generated by wsdl, Version=1.1.4322.510.
//
namespace FoodMovers.BusinessLogic.Proxy {
using System.Diagnostics;
using System.Xml.Serialization;
using System;
using System.Web.Services.Protocols;
using System.ComponentModel;
using System.Web.Services;
using System.Data;
using System.Xml;
using System.Runtime.Serialization;
/// <remarks/>
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Web.Services.WebServiceBindingAttribute(
Name="OrderManagerServiceSoap",
Namespace="http://foodmovers.com/services/OrderManager")]
public class OrderManagerService :
System.Web.Services.Protocols.SoapHttpClientProtocol {
public OrderManagerService() {
this.Url = "http://foodmovers.com/FoodMovers/" +
"FoodMoversWebServiceProjects_OrderManager/" +
"OrderManager.asmx";
}
[System.Web.Services.Protocols.SoapDocumentMethodAttribute
("http://foodmovers.com/services/OrderManager/GetStore",
RequestNamespace="http://foodmovers.com/services/OrderManager",
ResponseNamespace="http://foodmovers.com/services/OrderManager",
Use=System.Web.Services.Description.SoapBindingUse.Literal,
ParameterStyle=
System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
public StoreData GetStore(int StoreID) {
object[] results = this.Invoke("GetStore", new object[] {
StoreID});
return ((StoreData)(results[0]));
}
public System.IAsyncResult BeginGetStore(int StoreID,
System.AsyncCallback callback, object asyncState) {
return this.BeginInvoke("GetStore", new object[] {
StoreID}, callback, asyncState);
}
public StoreData EndGetStore(System.IAsyncResult asyncResult) {
object[] results = this.EndInvoke(asyncResult);
return ((StoreData)(results[0]));
}
...
As you can see, the Url for the Web service is set in the constructor, and there are methods for all of the operations that are exposed by the WSDL document. All data structures are included in this proxy as well, so the programmer only needs to include a reference to this class in order to get everything having to do with the service.
So let's create a user interface using the proxy as our class.
Interfaces
All user access to the system is through the presentation layer. In our case, the presentation layer is exposed through the proxy I just created. Let's create a simple WinUI application that uses the structure I have created. The application shows the address, contact, and credit information for stores in our database. Figure 14 shows the screen.
Figure 14. WinUI application that uses the OrderManager proxy class
The application has a combination list box that displays all stores. When a store is selected, the RTF box fills with information about the individual store. This is a pretty simple application to write. Here are the steps:
- Create two form objects: cmbStores (a combination list box) and rtfStoreInfo (a rich text box).
- Add references with Project...Add Reference, along with the using directives:

using System.Web.Services;
using FoodMovers.BusinessLogic.Proxy;

The FoodMovers.BusinessLogic.Proxy namespace contains all of the proxy classes that were created with WSDL.EXE. The System.Web.Services namespace is required because the proxy classes use it. This is the only indication to the programmer that there is something special about these classes.
- Declare the OrderManagerService and StoreData objects:

OrderManagerService svcOrderManager = new OrderManagerService();
StoreData datStores = new StoreData();

Now that the references are made and the directives declared, I can instantiate the OrderManagerService object that has all of the Web service operations (WebMethod) and the data classes (DataSet).
- Load the combination box by accessing GetAllStores():

private void frmMain_Load(object sender, System.EventArgs e)
{
datStores = svcOrderManager.GetAllStores();
for (int i = 0; i < datStores.Store.Count; i++)
cmbStores.Items.Add(datStores.Store[i].StoreName);
cmbStores.SelectedIndex = 0;
cmbStores_SelectedIndexChanged(sender, e);
}

First, call the GetAllStores method to fill datStores with a list of all stores in the database. From this, just get the StoreName property of each store and fill the combination box.
- Display single store information when the selected store changes:
private void cmbStores_SelectedIndexChanged(
object sender, System.EventArgs e)
{
string strRTF;
int iStore = cmbStores.SelectedIndex;
strRTF = @"{\rtf" +
@"{\fonttbl{\f0\fswiss\fcharset0\fprq2" +
@"{\*\panose 020b0604030504040204}Verdana;}}" +
@"{\colortbl;\red0\green0\blue0;" +
@"\red64\green64\blue200;" +
@"\red64\green200\blue64;" +
@"\red200\green64\blue64;}" +
@"\f0\fs30";
strRTF += @"{\cf3\b\fs36 " +
datStores.Store[iStore].StoreName + @"\par}";
strRTF += @"{" + datStores.Store[iStore].Street + @"\par}";
strRTF += @"{" + datStores.Store[iStore].City + ", " +
datStores.Store[iStore].State + " " +
datStores.Store[iStore].Zipcode + @"\par}";
strRTF += @"{\par\b Contact: \i " +
datStores.Store[iStore].ContactName + @"\par}";
strRTF += @"{\i " +
datStores.Store[iStore].ContactPhone + @"\par}";
strRTF += @"{Credit: ";
if (datStores.Store[iStore].CreditOK)
strRTF += @"{\cf3 Good}";
else
strRTF += @"{\cf4 Bad}";
strRTF += @"\par}";
strRTF += "}";
rtfStoreInfo.Rtf = strRTF;
}

All store information is contained in the datStores object, so displaying the details of a single store just requires looking at the index of that particular store. Since everything is available in the local DataSet object, there is no need to make another trip to the service to get a single store.
All other user interface projects are done using this same technique.
Conclusion
FoodMovers Distribution Company has decided to use a Service-Oriented Architecture approach to developing their new system. This architecture splits data and functionality into distinct layers, each of which can be maintained without adversely affecting the other layers.
All functionality is exposed by services delivered using international standards, rather than using proprietary data structures and machine-specific objects.
By providing a layered approach, the FoodMovers system is a truly scalable, maintainable system.
In the coming sections in this project, I will continue to build this entire system one piece at a time to show the power of Visual Studio .NET. The most important part of the entire process will be the tasks that the Project Architect goes through to design the system and track the progress of the system as it is developed and deployed. (To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.)
The following sections will cover:
Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration
Old systems and new systems need to live together and, most importantly, communicate important business data with each other. However, programs do not always support and understand each other's data formats, communications protocols, and languages. In addition, some programs provide humans with a view of the system. These user interfaces are designed to be consumed by humans, not by other programs. Furthermore, programs that must communicate often live in different organizations. How are we to integrate all of these systems?
In this section, I will discuss routes to accessing information services and the methods used to access them. Then we will develop the EDI and XML interfaces with our suppliers Good Old Soup Company and Hearty Soup Company, and the order interface for the stores.
Section 5, Extensions: Building Blocks for Extra Functionality
By now, we have created a system using the tools in Visual Studio .NET Enterprise Architect Edition. But we have just a basic system. What about security? What about attachments? What about updates? What about administration? We could develop these ourselves, but it would be nice if there was an alternative to custom development of these pretty standard pieces.
What we need is a concept of interchangeable parts for software development. This has been tried again and again with varying success. The C programming language came with a standard library (stdlib) of functions, such as printf and sscanf, that most C programmers gladly used rather than writing their own. Later, the Microsoft Foundation Class (MFC) for C++ development was made available to programmers working in an object-oriented Windows environment. Who wants to write a dialog box function if there is one available that works and does mostly what is needed?
In this section, I talk about the Web-service version of interchangeable parts. They take the form of standard extensions that are becoming available in the Web services universe. These extensions are part of Microsoft's Web Services Enhancements for Microsoft .NET (WSE). WSE extensions take the form of building blocks that can be integrated into a Web service quickly and easily. We will add attachments and security to our system to show how the building-block approach works.
Section 6, Going Live: Instrumentation, Testing, and Deployment
Once the architecture is designed and the code framework is created using Visual Studio, it is time to describe our plan for deployment and administration. In addition, there are several areas of implementation that need to be addressed before a robust, reliable, and secure architecture is deployed.
First, we need to develop a plan for "instrumentation." By placing "sensors" in our application, we can use instruments to provide a dashboard of our deployed system. Then we need to exercise the system in two areas, test and staging, before finally deploying it in a production environment.
In this section, I detail a plan for exception and event management, and introduce the concept of "exception mining," which provides a method for wading through the information stream coming from the application to find events that need attention.