Do more with SOA Integration: Best of Packt

By : Arun Poduval, Doug Todd, Harish Gaur, Jeremy Bolie, Kevin Geminiuc, Lawrence Pravin, Markus Zirn, Matjaz B. Juric, Michael Cardella, Praveen Ramachandran, Sean Carey, Stany Blanvalet, The Hoa Nguyen, Yves Coene, Frank Jennings, Poornachandra Sarang, Ramesh Loganathan, Guido Schmutz, Peter Welkenbach, Daniel Liebhart, David Salter, Antony Reynolds, Matt Wright, Marcel Krizevnik, Tom Laszewski, Jason Williamson, Todd Biske, Jerry Thomas

Overview of this book

Service Oriented Architecture (SOA) remains a buzzword in the business and IT community, largely because the ability to react quickly is of utmost importance, and SOA can be the key to achieving it. The challenge lies in the tricky task of integrating all the applications in a business through a Service Oriented Architecture, and "Do more with SOA Integration: Best of Packt" will help you do just that with content from a total of eight separate Packt books.

"Do more with SOA Integration: Best of Packt" will help you learn SOA integration from scratch. It will help you demystify the concept of SOA integration, understand basic integration technologies and best practices, and get started with SOA Governance. The book draws from eight separate titles in Packt's existing collection of SOA books:

  1. BPEL Cookbook
  2. SOA Approach to Integration
  3. Service Oriented Architecture: An Integration Blueprint
  4. Building SOA-Based Composite Applications Using NetBeans IDE 6
  5. Oracle SOA Suite Developer's Guide
  6. WS-BPEL 2.0 for SOA Composite Applications with Oracle SOA Suite 11g
  7. Oracle Modernization Solutions
  8. SOA Governance

The book begins with a refresher of SOA and the various types of integration available, and then delves deeper into integration best practices with XML, binding components, and web services, drawing on titles such as "Oracle SOA Suite Developer's Guide" and "BPEL Cookbook". Along the way you'll also learn from a number of real-world scenarios. By the end of "Do more with SOA Integration: Best of Packt" you will be equipped with knowledge from a wide variety of Packt books and will have learnt from a range of practical approaches to really get to grips with SOA integration.

Chapter listings with corresponding titles:

  • Preface - Dismantling SOA Hype: A Real-World Perspective (BPEL Cookbook)

  • Chapter 1 - Basic Principles: Types of Integration (Service Oriented Architecture: An Integration Blueprint)

  • Chapter 2 - Integration Architecture, Principles, and Patterns (SOA Approach to Integration)

  • Chapter 3 - Base Technologies: Basic Technologies Needed for SOA Integration (Service Oriented Architecture: An Integration Blueprint)

  • Chapter 4 - Best Practices for Using XML for Integration (SOA Approach to Integration)

  • Chapter 5 - Extending Enterprise Application Integration (BPEL Cookbook)

  • Chapter 6 - Service-Oriented ERP Integration (BPEL Cookbook)

  • Chapter 7 - Service Engines (Building SOA-Based Composite Applications Using NetBeans IDE 6)

  • Chapter 8 - Binding Components (Building SOA-Based Composite Applications Using NetBeans IDE 6)

  • Chapter 9 - SOA and Web Services Approach for Integration (SOA Approach to Integration)

  • Chapter 10 - Service- and Process-Oriented Approach to Integration Using Web Services (SOA Approach to Integration)

  • Chapter 11 - Loosely-coupling Services (Oracle SOA Suite Developer's Guide)

  • Chapter 12 - Integrating BPEL with BPMN using BPM Suite (WS-BPEL 2.0 for SOA Composite Applications with Oracle SOA Suite 11g)

  • Chapter 13 - SOA Integration: Functional View, Implementation, and Architecture (Oracle Modernization Solutions)

  • Chapter 14 - SOA Integration: Scenario in Detail (Oracle Modernization Solutions)

  • Appendix: Bonus Chapter - Establishing SOA Governance at Your Organization (SOA Governance)
Table of Contents (20 chapters)
Do more with SOA Integration: Best of Packt
Credits
About the Contributors
www.PacktPub.com
Preface

Policies


The second piece of SOA governance is policies. Policies are the standards and guidelines that guide the staff associated with SOA towards the desired behavior. There are three key timeframes that have been addressed in the book:

  1. During the processes associated with determining what IT projects to fund and execute: These processes are frequently associated with the broader subject of IT governance. While SOA governance should not introduce new governance processes associated with deciding what projects to fund and execute, policies associated with SOA governance should be included in the criteria. Surprisingly, vendors that offer tools in the SOA governance space do not have a term for this timeframe. We will refer to this as pre-project governance.

  2. During project execution: In the vendor community, this is frequently referred to as design-time governance; however, the scope of SOA governance is certainly broader than just the design activities of a project. We will refer to this as project governance.

  3. During the operation of production systems: In the vendor community this timeframe is referred to as run-time governance. As long as there are services running in your production systems, there is a need to govern the interactions between consumers and providers.

Pre-Project Governance

During this timeframe, the desired behavior is centered on a simple concept: building the right thing. The decision-making process that results in an approved and funded project is what sets the initial scope for the effort. This initial scope has a significant impact on determining the artifacts that will be produced. If the scope incorporates an enterprise viewpoint appropriately, an organization may quickly build up a library of services whose expectations for reuse and agility in times of change have been well thought out. If the scope does not incorporate enterprise viewpoints and needs appropriately, it creates a risk that the artifacts developed will struggle to provide value outside of the initial project.

Artifacts

In order to perform pre-project governance, the following artifacts are recommended:

  • Organization Chart

  • Business Domain/Capability Models

  • Business Process Models

  • Application Portfolio

  • Service Portfolio

The first artifact, the Organization Chart, is a key element in how projects get defined. Normally, relationships with areas of the business outside of IT are highly influenced by the organization of the IT department. Change the organization of the IT department and you may change the way relationships are formed and managed with the rest of the business. The same also holds true with the organization of the rest of the business. Change the organizational structure of the business, and it's likely that the IT organizational structure will change with it.

The reason the organizational chart is so critical is that there is normally a direct relationship between budget and organization. If a particular department in the business is in control of its own budget, that can be a big barrier either to creating services that will be used by other departments, or to using services that are managed by other departments. In the first scenario, many budget owners may want other organizations to contribute to the cost of development or maintenance of services if they use them. In the second scenario, they may be asked by those organizations to contribute to their costs. In addition, the projects that are proposed may have dependencies that other organizations must deliver in order to be successful. Depending on the state of relationships in the organization, this could be a major hurdle to overcome.

The thing to remember when adopting SOA is that it will likely put pressure on the existing organizational structure. If it doesn't, there are a number of possibilities. First, your organization may already be aligned along the concept of service consumers and service providers. Second, your organization may not let organizational boundaries get in the way of doing the right thing. Finally, it may be that your SOA efforts are working too much within the boundaries of the organization and not really creating the type of change possible with SOA.

The next three artifacts, Business Domain/Capability Models, Business Process Models, and the Application Portfolio are closely related. These are analysis artifacts that should be used to guide the decisions on what services should be created. At a minimum, as discussed in the Advasco example, some form of domain/capability models and business processes models should be leveraged. Business process models on their own create a risk for creating process silos; just as many organizations today have application silos based upon their application portfolios. When business process models and application portfolios are combined with a domain/capability model, the resulting combination can be a powerful tool in guiding the decisions on what services should be established.

The final artifact, the Service Portfolio, is frequently a catalog of services that have been built and are available in production, but it is much more powerful when it is used as a planning tool. When the organization has taken the time to perform business process analysis and business domain/capability analysis, an outcome should be the definition of key services that the organization needs to create to fully leverage SOA.

Policies for Pre-Project Governance

The following are questions or policies that you should consider in your pre-project governance efforts:

  • Has the proposed project identified candidate services?

  • Has the proposed project mapped candidate services to the business domains as represented in the business domain/capability models?

  • Has the proposed project reviewed the service portfolio against the list of candidate services?

  • Has an appropriate team of project stakeholders been identified based upon candidate services?

  • Has the proposed program/project been appropriately structured and scheduled to properly manage the development and integration of new and existing services?

  • Have all funding details been determined based upon the services proposed and the organizations involved?

  • Does the roadmap include the development of services with high potential for reuse?

  • Are projects encouraged to reuse existing services, where appropriate, based upon the business domain models and business objectives?

  • Are projects allowed to create redundancies, where appropriate, based upon the business domain models and business objectives?

  • Have existing systems been taken into account in the definition of the proposed services?

  • Is the organizational structure being reviewed on a regular basis based upon continued service analysis?

  • Does the organization have a clear approach to resolving service ownership models?

  • Are business processes properly leveraging services?

  • Does your service portfolio properly account for any globalization impact?

  • Does the service portfolio properly account for any planned areas for growth by acquisition?

Remember, this phase is the key timeframe to ensure that the organization builds the right services. If the decisions on what services to build are not investigated until after projects have been defined and funded, constraints will already exist that can be an impediment to building the right services.

Project Governance

During this timeframe, there are two major concerns: building the right services and building those services the right way. While the pre-project governance efforts are supposed to focus on building the right services, all too often, the proper artifacts are not available to make the necessary decisions at the time projects are approved. As a result, the necessary analysis and the decisions associated with the project architecture are performed within the project itself. This creates some risk, because the project does establish constraints that may get challenged by the results of analysis and architecture.

Like the architecture decisions that are made prior to project approval, the architecture and design activities must incorporate enterprise viewpoints and needs appropriately. For this purpose, the artifacts and policies mentioned as part of the pre-project governance all still apply. Besides these concerns, this timeframe is where an organization must ensure that services are built the right way.

Artifacts

In order to perform project governance, the following artifacts, besides those already mentioned in the pre-project governance section, are recommended:

  • Service Technology Reference Architecture

  • Service Security Reference Architecture

  • Service Blueprints and Frameworks

  • Standard Information Models and Schemas

These artifacts are only the ones that have an impact on SOA governance. Clearly, there are many other artifacts that are associated with general development governance, but those that are not specific to service development and integration are outside of the scope of this book.

Service Technology Reference Architecture

The first artifact is the Service Technology Reference Architecture. The purpose of this artifact is to ensure that the appropriate technologies are used for the service being developed. The document should first define the appropriate service types for the organization and then map those types to specific service technologies. The document should never have more than one type mapping to the same set of specific service technologies; if multiple types map to the same set, it may create more confusion than clarity. While those service types may be useful in determining service ownership, this document is focused on determining service technologies. On the flip side, enterprises with centralized infrastructure will also want to ensure that there is only one set of service technologies for each service type. For example, there will always be a catchall type, like "General Business Service", that will map to an application server and its associated service framework.

Is it good to have both a .NET platform running on Windows Server and a Java EE application server running on a Linux platform? The correct answer is: it depends. If your organization was a Microsoft development shop, but then acquired another company that was a Java development shop, it may make sense to have two general business service platforms. If both these groups still maintain their own data centers and operations staff after the acquisition, there aren't too many issues. If, instead, the acquisition results in a consolidation of data centers and a reduction in operational staff, justifying the continued operation of both platforms will be much more difficult.

Here are some service types that you should consider. Each one has the potential for being mapped to a specific set of service technologies.

  • Composite Services: These are services that are built by combining the output of two or more services, and aggregating the respective responses into a single response.

  • Automated (Orchestrated) Processes: These are services that are built by executing a fully automated sequence of actions as represented in a graphical process model. Technically speaking, a composite service is normally a specialized case of an orchestrated process, but if you choose to leverage very specific technologies for narrow service types, it is possible that you may need to define both service types and their associated service platform and technologies.

  • Integration Services: These are services whose whole purpose is to service enable some system that does not support the standards required to speak natively to service consumers.

  • Presentation Services: These are services that provide information in a presentation-friendly format. They don't actually produce the end user interface, but they provide the information in such a way that it is easily consumed by user interface technologies. This may require a slight variation of the standard service platform.

  • Management Services: These are services that expose management and administrative functionality. In the past, there have been management-specific technologies including SNMP and JMX. Today, an increasing number of products expose management interfaces as SOAP or XML/HTTP interfaces; however, the use of SNMP and JMX is still far more prevalent.

  • Information Services: These are services that are used to retrieve information from a variety of data sources, aggregating the results into a single response. Vendors typically market solutions in this space as data service platforms or data integration platforms. These services differ from composite services in that they are specifically designed to talk only to data sources on the back end, rather than any arbitrary web service.

  • Content Subscription Services: These are services that provide content feeds, typically adhering to feed syndication standards such as RSS and ATOM.

  • General Business Services: This is the catchall category for any service that doesn't fit into any of the other categories.
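To make the composite service type concrete, here is a minimal sketch in Python. The sub-services and field names are hypothetical stand-ins; a real composite would invoke remote services rather than local functions.

```python
# Hypothetical sketch: a composite service aggregates the responses
# of two underlying services into a single response.

def customer_profile_service(customer_id):
    # Stand-in for a real "customer" business service.
    return {"customer_id": customer_id, "name": "Acme Corp"}

def account_balance_service(customer_id):
    # Stand-in for a real "account" business service.
    return {"customer_id": customer_id, "balance": 1250.00}

def customer_summary_composite(customer_id):
    """Composite service: invoke both services and merge the results
    into one aggregated response."""
    profile = customer_profile_service(customer_id)
    balance = account_balance_service(customer_id)
    return {**profile, **balance}

print(customer_summary_composite("C-42"))
```

The same shape, with the graphical process model and engine replacing hand-written control flow, is what an orchestrated process automates.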

When mapping these service types to technologies, the following things must be considered:

  • Service Platform: The technology decisions start with an underlying hosting platform. Typical choices include a Java Application Server, Windows Server, a Data Integration/Services Platform, a BPEL-based orchestration platform, and some ESB offerings.

  • Service Communication Technology: The communication technologies are the protocols used to interact with the service. This includes both the message format used, such as POX, SOAP, or RSS, and the underlying message transport (HTTP, WebSphere MQ, or Tibco JMS).

For example, using the service types listed, each type can be mapped to a specific service platform and a set of communication technologies.
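As an illustration only (the actual mapping is organization-specific, and the platform and protocol names below are examples, not recommendations), such a type-to-technology mapping might be captured as:

```python
# Illustrative only: an organization-specific mapping of service types
# to a service platform and communication technologies.
SERVICE_TYPE_MAPPING = {
    "Composite Service":        {"platform": "ESB",
                                 "formats": ["SOAP"], "transports": ["HTTP", "JMS"]},
    "Automated Process":        {"platform": "BPEL orchestration engine",
                                 "formats": ["SOAP"], "transports": ["HTTP"]},
    "Integration Service":      {"platform": "ESB adapter framework",
                                 "formats": ["SOAP", "POX"], "transports": ["HTTP", "WebSphere MQ"]},
    "Presentation Service":     {"platform": "Java Application Server",
                                 "formats": ["POX"], "transports": ["HTTP"]},
    "Information Service":      {"platform": "Data Services Platform",
                                 "formats": ["SOAP"], "transports": ["HTTP"]},
    "Content Subscription":     {"platform": "Java Application Server",
                                 "formats": ["RSS", "ATOM"], "transports": ["HTTP"]},
    "General Business Service": {"platform": "Java Application Server",
                                 "formats": ["SOAP"], "transports": ["HTTP"]},
}
```

Note that each type maps to exactly one platform, per the reference architecture rule above.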

Beyond the mapping of service types to service technologies, the service reference architecture must also address policies associated with the message itself that go beyond the core communication technology. This can include naming conventions for URLs, namespace conventions for the XML messages, references to a canonical model, or more. It must also provide policies on how the non-functional capabilities associated with services interactions will be provided by the underlying infrastructure. This includes:

  • Security (see Service Security Reference Architecture)

  • Routing and load balancing

  • Transport and mediation

  • High availability and failover

  • Monitoring and management

  • Versioning and transformations

These non-functional capabilities are a key aspect of SOA, because they are the foundation of run-time governance. At the same time, they must be factored into the design-time decisions, because if the development teams don't utilize the technology appropriately, the ability to enforce run-time governance policies will disappear. If a team chooses to build its own security implementation or hard code rules for versioning into the implementation, it will likely require a code change and the associated release process to implement and enforce the policies in a service contract for a new consumer, increasing the time and effort required to bring new consumers online. We want to strive to implement these non-functional capabilities through policy-driven infrastructure that is configured, rather than coded.

Service Security Reference Architecture

The next artifact is the Service Security Reference Architecture. This can be included as a subset of the Service Technology Reference Architecture, or created as a standalone artifact. Regardless of the approach, there are two questions that must be answered by the reference architecture:

  1. What security policies must be enforced?

  2. What technologies are used for enforcing those policies?

The first question must guide the developers of services and their consumers on security policies for authentication, authorization, encryption, digital signatures, and threat prevention.

There are two components to authentication. The first is a simple policy that states whether or not identity is required on all service invocations. It is strongly recommended that this is the case. With this policy in place, your organization must then specify what constitutes identity. It can be a user's name, an application's name, a company's name, or any combination of them. It may include additional attributes such as group membership or more. The one thing it should not be is anonymous. Companies must assume that in the future many services will comprise a particular interaction, with multiple teams involved. If identity on messages does not exist, it makes it extremely difficult to enforce run-time governance, since identity is what ties an interaction back to a service contract, as well as making the debugging process much more costly should problems occur in production.

With this identity specified, the next step is to decide when authentication is required on service invocations. Unlike a web application that faces an end user, one cannot assume that a simple challenge can be issued to allow a consumer to specify credentials on demand. When dealing with system-to-system interactions, it is very unlikely that the service consumer has access to the password of the user associated with the current thread. If passwords are not issued, then there needs to be a way to ensure that the credentials passed on the request have not been forged. For internal consumers, many organizations choose to only validate that credentials are from a trusted source, rather than performing an authentication on every request. This, of course, assumes that an authentication was performed at the user interface of the system.
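One common way to validate that credentials come from a trusted source, without re-authenticating every request, is for the trusted source to sign the identity with a shared secret. A hedged sketch (the secret handling and token format here are simplified for illustration; real deployments would use a key store and a standard token format such as SAML):

```python
import hashlib
import hmac

SHARED_SECRET = b"example-secret"  # illustration only; use a real key store

def issue_token(identity: str) -> str:
    """Called by the trusted source (e.g. the user-facing tier) after it
    has actually authenticated the user."""
    signature = hmac.new(SHARED_SECRET, identity.encode(), hashlib.sha256).hexdigest()
    return f"{identity}:{signature}"

def verify_token(token: str) -> bool:
    """Called by the service provider: checks the signature instead of
    performing a full authentication on every request."""
    identity, _, signature = token.rpartition(":")
    expected = hmac.new(SHARED_SECRET, identity.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

print(verify_token(issue_token("alice")))  # a valid token verifies
print(verify_token("alice:forged"))        # a forged signature does not
```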

Authorization, on the other hand, should always be performed on all service requests. Besides stating this policy, the reference architecture must also cover whether role-based or user-based authorization will be leveraged. Role-based authorization is generally preferred, as managing policies for individual users can quickly become an impossible task for large organizations.
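A minimal sketch of why role-based authorization scales better: policies are attached to a handful of roles instead of to every individual user, so adding a user never touches the policy table. All names here are hypothetical.

```python
# Hypothetical role-based authorization sketch: operations are granted
# to roles, and users are mapped to roles.
ROLE_PERMISSIONS = {
    "account_clerk":   {"account.read"},
    "account_manager": {"account.read", "account.update"},
}
USER_ROLES = {
    "alice": ["account_manager"],
    "bob":   ["account_clerk"],
}

def is_authorized(user: str, operation: str) -> bool:
    # Authorized if any of the user's roles grants the operation.
    return any(operation in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, []))

print(is_authorized("alice", "account.update"))  # True
print(is_authorized("bob", "account.update"))    # False
```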

At first glance, you may think that encryption is only concerned with the protection of sensitive data from systems that have no need to access it, but the certificate exchange associated with bi-directional SSL communications can also be leveraged to ensure that service endpoints only accept connections from authorized points on the network. For example, if you are leveraging an XML appliance as a policy enforcement point, it is possible for a rogue consumer to circumvent the appliance by entering the hostname or IP address of the service endpoint directly. By forcing a bi-directional certificate exchange between the XML appliance and the service endpoint in order to exchange service messages, these rogue consumers would be prevented from accessing the service, since they would not have the required certificate.
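In Python's standard `ssl` module, for instance, a service endpoint can be configured to refuse connections from peers that do not present a certificate signed by the trusted CA (such as the one issued to the XML appliance). A sketch, with placeholder file paths:

```python
import ssl

def build_mutual_tls_context(ca_file=None, certfile=None, keyfile=None):
    """Server-side TLS context that *requires* a client certificate,
    so only peers holding a cert issued by the trusted CA (e.g. the
    XML appliance) can complete the handshake. File paths are
    placeholders supplied by the deployment."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    if ca_file:
        context.load_verify_locations(cafile=ca_file)
    # CERT_REQUIRED turns one-way TLS into bi-directional (mutual) TLS:
    # the handshake fails unless the client presents a valid certificate.
    context.verify_mode = ssl.CERT_REQUIRED
    return context
```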

Digital signatures are a means of ensuring that the service messages have not been modified en route. While this is typically not used with internal consumers, it is frequently used when dealing with external consumers or external service providers, especially when those requests are sent over the open Internet. It may also be required on internal messages to ensure that some portion of the message, such as identity credentials, is coming from a trusted source.

Finally, threat protection is concerned with preventing consumers from exploiting a variety of techniques that can compromise the security of the service provider. Common examples include checking for SQL injection or detecting harmful XML messages, such as ones that are compliant with the XML schemas involved, but exploit recursion or buffer overflows to attempt to crash the server that will process the message.
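A schema-valid message can still be hostile, for example through extreme nesting or sheer size. A sketch of a pre-processing check using only the standard library (the limits are illustrative; dedicated threat-protection products do far more):

```python
import xml.etree.ElementTree as ET
from io import StringIO

MAX_DEPTH = 50        # illustrative limits; tune per deployment
MAX_BYTES = 1_000_000

def looks_safe(xml_text: str) -> bool:
    """Reject oversized or excessively nested XML before full processing.
    (Malformed XML would raise ParseError and should be rejected too.)"""
    if len(xml_text.encode("utf-8")) > MAX_BYTES:
        return False
    depth = 0
    for event, _elem in ET.iterparse(StringIO(xml_text), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > MAX_DEPTH:
                return False
        else:
            depth -= 1
    return True

print(looks_safe("<a><b/></a>"))                 # True
print(looks_safe("<x>" * 100 + "</x>" * 100))    # False: too deeply nested
```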

The second question deals with the specific technologies associated with enforcing those policies. This begins with the service messages themselves. While the reference architecture has previously specified what constitutes identity, it must now say how that identity is represented on service messages. Will it be issued in plain text in a standard location? Will it be specified using transport-specific mechanisms (for example, HTTP headers) or as part of the actual message payload? The most flexible mechanism available today is only associated with SOAP messages, and that is the WS-Security framework. This framework establishes a standard location in the SOAP header for credentials, and provides a framework for specifying profiles that allow different token types to be used. Standard profiles include the Username profile (simple plaintext and hashed credentials), the SAML (Security Assertion Markup Language) Token Profile, the X.509 Token Profile, the Kerberos Token Profile, and the REL (Rights Expression Language) Token Profile. Not all products support all of the profiles, so be sure to dig deeper when a product claims to have WS-Security support. It may only support one or two of the token profiles, or potentially none at all, since WS-Security also specifies standard ways of handling message encryption and digital signatures.
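For illustration, the standard location WS-Security defines is a `Security` element inside the SOAP `Header`. A minimal UsernameToken (the plaintext Username profile, shown here only to make the structure visible; production use would hash or replace the password) built with the standard library:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def envelope_with_username_token(username: str, password: str) -> str:
    """Build a SOAP 1.1 envelope whose header carries a WS-Security
    UsernameToken (plaintext Username profile, illustration only)."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
    security = ET.SubElement(header, f"{{{WSSE_NS}}}Security")
    token = ET.SubElement(security, f"{{{WSSE_NS}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE_NS}}}Username").text = username
    ET.SubElement(token, f"{{{WSSE_NS}}}Password").text = password
    ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    return ET.tostring(envelope, encoding="unicode")

print(envelope_with_username_token("alice", "secret"))
```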

Once the policies for the messages themselves have been set, the reference architecture must address the specific infrastructure associated with the enforcement of those policies. Will a standalone gateway, such as some ESBs or an XML appliance, be used to enforce authorization policies? If so, then all service traffic must be routed through those gateways, and techniques must be leveraged to prevent rogue consumers from circumventing those gateways. While a policy may say that SAML assertions must be used, how does a developer of a service consumer place those assertions on a message? Are they responsible for writing the code to do so, or will a framework be provided for them? If a framework is used, must the developer explicitly reference it, or will the insertion of credentials happen implicitly? The security reference architecture must provide enough information so that the developers of a service consumer or service provider know exactly what is required of them in their coding efforts versus what must be simply configured as part of establishing a service contract.

Service Blueprints and Frameworks

When trying to guide people to the desired behavior, a very powerful technique is to simply give them examples of it. This is the role of service blueprints. The reference architectures discussed can contain a large number of policies that can seem daunting to a developer, creating the risk that they get ignored. By creating blueprints that show a common pattern, and preferably the simplicity associated with following the pattern, developers are more likely to follow the guidelines.

For example, if your organization will be exposing external services, a challenge may be the propagation of identity to back-end systems, since identity checks may occur in a DMZ or through federation, yet the back-end services have no access to those identity stores. A blueprint can be created that demonstrates how that original identity flows through the connections required, and what code is required at each step (if any) to ensure it happens.

Blueprints also provide a convenient way of demonstrating how to choose within alternate strategies for a given service type. For example, a general business service may be accessible by either asynchronous messaging via message-oriented middleware, or via a synchronous messaging approach over HTTP. When should one be used over the other, and what are the differences in how to leverage them?

The second key piece of this artifact is service frameworks. While the underlying service platforms typically include frameworks for HTTP communication, XML message processing, and SOAP processing, that doesn't mean they are easy to use. This is especially true for security. While many frameworks can construct a SOAP message through the use of a wizard or code generator and one or two lines of code, adding a SAML assertion to that SOAP message can be much more complicated. A common theme with governance is that if you make compliance the path of least resistance, you're more likely to get compliance. If you have a policy that all service messages must contain identity, providing a framework so this can be done in one or two lines of code, or zero if possible, will ensure that your developers are compliant with the policy, versus requiring them to research SAML libraries and write many lines of code to make it happen.
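The "path of least resistance" idea can be as simple as a helper the framework team ships, so that a consumer developer becomes policy-compliant in one line. The function and header name below are hypothetical:

```python
# Hypothetical framework helper: the service consumer calls one function
# and the framework attaches the required identity header, so compliance
# with the "identity on every message" policy costs one line of code.
def with_identity(headers: dict, identity: str) -> dict:
    enriched = dict(headers)                # don't mutate the caller's dict
    enriched["X-Identity"] = identity       # header name is illustrative
    return enriched

# Consumer code: one line to be policy-compliant.
request_headers = with_identity({"Content-Type": "text/xml"}, "app:order-portal")
print(request_headers)
```

A real framework would insert a signed token (e.g. a SAML assertion) rather than a plain header, but the developer-facing surface area can stay this small.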

Standard Information Models and Schemas

The final artifacts that are imperative for design-time governance are standard information models and schemas. A common goal associated with SOA is reuse, but reuse becomes much more difficult when common information is not represented consistently. For example, if there are two services that both need to deal with Account information, but both services represent that information in different ways, any consumer that needs to use both of those services must now implement logic that translates between the two definitions of Account.

In order to prevent this situation from becoming rampant, an organization must take this decision out of the hands of individual projects, and put it into the hands of a group with a broader focus, whether that is the entire enterprise or some larger domain. It is important that this group (or groups) understands that the goal is not to come up with the one universal representation that everyone agrees on, because the odds are it doesn't exist. Rather, the goal is to minimize the number of representations for common information. If it can be one, that's the optimal point, but if it winds up being three or four, and we provide mechanisms for easily translating between them, that is a big improvement over leaving it in the hands of every individual service consumer or service provider.
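A sketch of what such a centrally provided translation mechanism might look like, with two invented Account representations (all field names are hypothetical):

```python
# Two representations of the same Account concept (fields invented for
# illustration) and a centrally owned translator between them, so that
# individual consumers don't each reimplement the mapping.
def crm_to_billing_account(crm_account: dict) -> dict:
    return {
        "acct_no":   crm_account["accountId"],
        "acct_name": crm_account["displayName"],
        "currency":  crm_account.get("currencyCode", "USD"),  # default is illustrative
    }

crm_record = {"accountId": "A-100", "displayName": "Acme Corp"}
print(crm_to_billing_account(crm_record))
# {'acct_no': 'A-100', 'acct_name': 'Acme Corp', 'currency': 'USD'}
```

Owning a handful of translators centrally is what makes "three or four representations" tolerable.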

One factor that can play a key role in defining these models is the existence of industry standards for the information domains involved. For example, there are many messaging standards that exist for various verticals, such as financial services (SWIFT, ISO-20022), healthcare (HIPAA), and insurance (ACORD), which have been created for the explicit purpose of information exchange in business-to-business interactions. These schemas can be an excellent starting point for establishing internal standards, and should definitely be leveraged when exposing services externally. A second factor is the organization's use of third-party solutions, such as a major ERP product like SAP or Oracle. These systems often come with information schemas pre-packaged. The more customization to these schemas that you do, the more difficult it may be to upgrade that infrastructure. As a result, these schemas may also be a good starting point for establishing your own standards. The best scenario is where an industry standard exists that is independent of any third-party application, but supported by those third-party applications. That maintains independence from the vendor solution, yet leverages a schema with broad adoption in the industry.

Just as with the security reference architecture, the information models and schemas must not only provide the standard definitions, but also sufficient instructions on how to utilize them in service traffic. For example, if there were a common XML schema file for Account information called account.xsd, organizations should not allow individual projects to copy that file into their own projects. Rather, projects should reference the schema file from a location that is universally available. This allows the schema to be maintained and updated centrally, rather than having to visit each individual project and update the Account definition one at a time.
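This kind of policy lends itself to automated checking. The sketch below scans a project's schema files for xsd:import or xsd:include elements that point at a local copy of a governed schema instead of the central location. The governed file name (account.xsd) comes from the example above; the central URL is an illustrative assumption.

```python
# Hypothetical linter for the "reference, don't copy" schema policy.
# GOVERNED and CENTRAL_BASE are assumptions standing in for your own
# governed schema list and central schema repository.
import xml.etree.ElementTree as ET

XSD_NS = "http://www.w3.org/2001/XMLSchema"
GOVERNED = {"account.xsd"}                      # centrally managed schemas
CENTRAL_BASE = "http://schemas.example.com/"    # assumed central location

def check_schema_references(xsd_text):
    """Return governed schemas referenced via a local schemaLocation
    rather than the central URL."""
    root = ET.fromstring(xsd_text)
    violations = []
    for tag in ("import", "include"):
        for el in root.iter(f"{{{XSD_NS}}}{tag}"):
            loc = el.get("schemaLocation", "")
            name = loc.rsplit("/", 1)[-1]
            if name in GOVERNED and not loc.startswith(CENTRAL_BASE):
                violations.append(loc)
    return violations

project_xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:import namespace="urn:example:account" schemaLocation="account.xsd"/>
</xs:schema>"""
print(check_schema_references(project_xsd))  # the local copy is flagged
```

A check like this can run in the build pipeline, so a project that copies account.xsd locally fails before it ever reaches a governance review.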

Policies for Project Governance

The following are questions and policies that you should consider in your project governance efforts, in addition to any pre-project governance policies that were not enforced at that time:

  • Have all services been mapped to an appropriate type?

  • Are the service technologies chosen for each service consistent with the type to technology mapping specified in the reference architecture?

  • Does the service use the standard communication technologies specified in the reference architecture?

  • Does the service interface comply with all naming conventions for URLs as specified in the reference architecture?

  • Does the service interface comply with all namespace conventions as specified in the reference architecture?

  • Does the service interface properly reference all external schema definitions, rather than copying them locally?

  • Does the service interface use the standard schema definitions properly?

  • Do external facing services only expose industry standard schemas, where they exist?

  • Is the service interface compliant with industry standards, such as WS-I?

  • Does the service require identity on its messages?

  • Are all service consumers properly specifying identity on outgoing requests?

  • Have appropriate authorization policies been established for the service?

  • Is the service communication infrastructure being leveraged appropriately?

  • Are all internal consumers properly leveraging the standard service frameworks?

  • Are all internal providers properly leveraging the standard service frameworks?

  • Is all sensitive information properly encrypted according to the service security policies?

  • Have service contracts been established between all consumers and providers?

  • Are all aspects of the service contract fully specified including message schemas, versions, delivery schedule, points of contact, and expected usage rates?

  • Have all services been thoroughly and adequately tested, with testing results available to service consumers, if required by the service contract? For internal consumers, testing results should always be available to help counter the natural tendency for developers to resist using things they didn't personally write.

  • Have service managers been assigned for all new services?

  • Are the service boundaries identified in the solution consistent with the business domain models?

  • Has the solution incorporated existing services appropriately?

  • Has the solution properly published information about new services into the Service Registry/Repository?

  • Has the solution avoided creating redundant services that were not appropriate according to the business domain models?

Remember that all of these policies are in addition to policies that are already being enforced as part of your normal project governance process. Policies around coding conventions, project structures, code repositories, unit testing, integration testing, performance and capacity testing, and so on still apply.
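Several of the interface policies above, such as URL naming and namespace conventions, can be expressed as machine-checkable rules. The sketch below assumes two illustrative convention patterns; the actual patterns would come from your reference architecture, and the descriptor fields are hypothetical.

```python
import re

# Hypothetical automated checks for two of the project governance
# policies above: URL naming and namespace conventions. The regex
# patterns are assumptions; substitute your reference architecture's rules.
URL_RULE = re.compile(r"^/services/[a-z][a-z0-9-]*/v\d+$")
NS_RULE = re.compile(r"^urn:example:services:[a-z][a-z0-9.]*$")

def audit_service(descriptor):
    """Return a list of policy violations for one service descriptor."""
    findings = []
    if not URL_RULE.match(descriptor.get("url", "")):
        findings.append("URL does not follow naming convention")
    if not NS_RULE.match(descriptor.get("namespace", "")):
        findings.append("namespace does not follow convention")
    return findings

compliant = {"url": "/services/account-lookup/v1",
             "namespace": "urn:example:services:account"}
print(audit_service(compliant))   # []
print(audit_service({"url": "/AccountLookup", "namespace": "acct"}))
```

Automating the mechanical checks frees the review board to spend its time on the judgment calls, such as service boundaries and redundancy.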

Run-time Governance

During this timeframe, the major concern is the correct behavior of service consumers and service providers, so that the infrastructure remains operational and in a healthy state at all times. There are two keys to this. The first is an accurate understanding of the role of the infrastructure in the run-time environment at the time solutions are built; the second is the appropriate use of the run-time infrastructure to enforce the policies established in the service contract. Unlike the other timeframes, there is really only one artifact used to describe run-time behavior: the service contract. Before covering that, let's first look at a conceptual view of the infrastructure and the guidance that must be given to teams during their design processes.

Policy-Driven Infrastructure

At its core, the run-time infrastructure consists of three things: infrastructure used to execute the logic of the service consumer, infrastructure used to execute the logic of the service provider, and infrastructure used to allow communication between the two. Earlier, it was stated that a goal should be to minimize the ways in which a service consumer and a service provider can communicate. The reference architectures establish the policies that define these standards. With these standards in place, there are three core principles that should be adopted:

  • Service consumers are responsible for ensuring that all messages they send are compliant with the service communication standards.

  • Service providers are responsible for ensuring that they expose endpoints that can consume messages that are compliant with the service communication standards.

  • The service communication infrastructure will enforce all non-functional capabilities for all messages that are compliant with the service communication standards, including mediation between those standards.

This results in a logical picture like this:

This clearly leads to a simple statement about run-time behavior: all service messages are compliant with the service communications standards. Unfortunately, that is seldom the case. If your organization leverages third-party products, it is unlikely that they will be compliant with all of your standards out-of-the-box. The key principle to follow, however, is that it is the responsibility of the non-compliant party to find a way to be in compliance, not the service communications infrastructure. Previous integration approaches, such as EAI technology, attempted to allow the endpoints to do whatever they wanted, and to mediate between all of this in the middle. This quickly ran into problems, as precious CPU cycles were spent on transformations and other activities to tie systems together, which degraded the performance of other transactions that required little or no mediation. The right approach is to push these adapters out to the endpoints, in an approach like this:

In this approach, it is the responsibility of the service consumer or service provider to put an adapter into its processing path before sending messages out through the service communications infrastructure. This can still involve EAI technology, but that technology now handles only the message traffic associated with the non-compliant system, rather than all message traffic. These adapters can also be employed in-process at the consumer or provider, such as using a third-party SOAP library within a Java execution environment that doesn't natively provide one.
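The endpoint-adapter principle can be sketched as follows: a non-compliant legacy provider emits a pipe-delimited string, and an adapter owned by that provider converts it to a standard XML message before it enters the service communications infrastructure. The legacy format and the Account message shape are illustrative assumptions.

```python
# Minimal sketch of an endpoint adapter: the transformation cost is
# paid only by the non-compliant provider's own traffic, not by all
# messages passing through the shared infrastructure.
import xml.etree.ElementTree as ET

def legacy_lookup(account_id):
    # Non-compliant legacy format (illustrative)
    return f"{account_id}|ACTIVE|1500.00"

def adapter(raw):
    """Translate the legacy format into the compliant XML payload."""
    acct_id, status, balance = raw.split("|")
    root = ET.Element("Account")
    ET.SubElement(root, "Id").text = acct_id
    ET.SubElement(root, "Status").text = status
    ET.SubElement(root, "Balance").text = balance
    return ET.tostring(root, encoding="unicode")

def compliant_endpoint(account_id):
    # The adapter sits in the provider's own processing path.
    return adapter(legacy_lookup(account_id))

print(compliant_endpoint("12345"))
```

Whether the adapter lives in-process, as here, or as a separate deployable in front of the legacy system, the rest of the infrastructure sees only compliant messages.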

The final adjustment to this diagram is to understand that the standards may be significantly different when dealing with an external party, whether that party is a service consumer or a service provider. This results in the following picture:

In this picture, an external service consumer communicates with an external gateway using industry standard communications technologies, both for the underlying transport as well as the messaging schemas. It is the responsibility of the external service consumer to be compliant with these industry standards.

The external gateway is responsible for the initial enforcement of security policies, as well as any mediation from the industry standards to the internal standards. Again, this encompasses both transport and messaging schema. If multiple versions of an industry standard exist, and are supported, the external gateway must be capable of transforming any of them into the approved internal corporate standards. If the internal standards are identical to the external standards, clearly this step is unnecessary. As a result, a best practice is to try to leverage industry standards for internal message formats as well, although it is recognized that this may require extending the standard for additional internal information.

Inside the corporate data center, all message traffic through the service communications infrastructure must be compliant with the corporate standards. Mediation within the corporate standards is a capability of the service communications infrastructure. This can include moving a message from an HTTP transport to a JMS-based transport (if both are allowed), mediating between POX/HTTP and SOAP/HTTP, and so on. It also includes mediation required for versioning of those standards. If the schema for an information entity has changed, but the use of a previous version is still allowed, the infrastructure should handle transformations between the two, when it is required.
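Version mediation of this kind can be sketched as a simple transformation step in the infrastructure. The v1 and v2 Account shapes below are invented for illustration (v2 renames one element and adds a version attribute); a real deployment would drive this from the governed schema definitions.

```python
# Sketch of version mediation by the service communications
# infrastructure: a consumer still sending the (assumed) v1 Account
# payload is upgraded to v2 before reaching a v2-only provider.
import xml.etree.ElementTree as ET

def mediate_v1_to_v2(v1_xml):
    """Hypothetical upgrade: v2 renamed <Name> to <FullName> and
    added a required version attribute on the root element."""
    root = ET.fromstring(v1_xml)
    name = root.find("Name")
    if name is not None:
        name.tag = "FullName"
    root.set("version", "2")
    return ET.tostring(root, encoding="unicode")

v1 = "<Account><Id>42</Id><Name>Jane Doe</Name></Account>"
print(mediate_v1_to_v2(v1))
```

Because the infrastructure owns this step, consumers on the previous version keep working while providers move forward, which is exactly the backward compatibility the policies below ask about.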

It is the responsibility of the internal service consumer and the internal service provider to ensure that they are compliant with at least one of the standards for service communication. The endpoints do not need to support all of them, but they must support at least one. Where a service consumer or provider is non-compliant, it is responsible for employing an adapter, whether in-process or as an external entity, to provide a compliant interface. Clearly, native compliance is preferable, as it prevents the proliferation of "glue" infrastructure used to tie everything together. The policies around service communication technologies should also be used as part of your technology evaluation process for third-party packages, to prevent this need as much as possible.

Service Contracts

With the standards for communication established, the infrastructure can now focus on the enforcement of the policies within service contracts. Some policies may be common to all service contracts, consistent with the policies that are in place in the reference architectures. For example, if the service technology reference architecture states that only XML payloads are allowed, this should be reflected in all service contracts. Any service message received by the communications infrastructure that does not contain an XML payload should be rejected.
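A contract-wide policy like "only XML payloads are allowed" maps directly to a gate at the communications infrastructure. The sketch below simply checks well-formedness; a real gateway would typically also validate against the governed schemas.

```python
# Minimal sketch of a contract policy enforced at the infrastructure:
# any message whose payload is not well-formed XML is rejected before
# it reaches the provider.
import xml.etree.ElementTree as ET

def enforce_xml_payload(payload):
    """Return (accepted, reason) for an incoming message payload."""
    try:
        ET.fromstring(payload)
    except ET.ParseError:
        return False, "rejected: payload is not well-formed XML"
    return True, "accepted"

print(enforce_xml_payload("<Order><Id>7</Id></Order>"))
print(enforce_xml_payload('{"id": 7}'))  # a JSON payload is rejected
```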

While many of these policies can, and should, be tested at development time, they must also be enforced at run-time, whether to deal with unknown bugs in the consumer or provider, to protect against rogue applications that didn't follow appropriate testing procedures, or, in the case of external consumers, because we don't know what testing was performed at development time.

In addition, there are behaviors that cannot be handled by development-time testing, typically those associated with SLA enforcement. A capacity test can verify that the system behaves properly when 1,000 users of a service consumer send simultaneous requests, but it can't account for a mistake in the analysis of the user base. If the real number is 10,000 users, how do we prevent the system from being overwhelmed? Each individual message may be fully compliant with all standards, but it is the much higher rate of message traffic that creates the problem.

The service contract must specify the expected usage by the consumer in an appropriate level of detail, as well as the expected response time from the provider when the system is behaving as expected. Additionally, thresholds for both usage by the consumer and response time from the provider must be established. Exceeding these thresholds results in notifications, allowing corrective action to be taken before a problem occurs, or a switch to a mode of self-preservation, where requests will be rejected in order to protect the back end service implementation from a complete failure.
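The two-threshold behavior described above can be sketched as a sliding-window counter per consumer: exceeding the warning threshold raises a notification so corrective action can be taken, while exceeding the hard limit triggers self-preservation and requests are rejected. The threshold numbers are illustrative contract terms, not recommendations.

```python
import time
from collections import deque

# Sketch of contract threshold enforcement. warn_per_sec and
# max_per_sec stand in for the rates agreed in the service contract.
class ContractEnforcer:
    def __init__(self, warn_per_sec, max_per_sec, window=1.0):
        self.warn, self.max, self.window = warn_per_sec, max_per_sec, window
        self.stamps = deque()
        self.notifications = []

    def admit(self, now=None):
        now = time.monotonic() if now is None else now
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.max:
            return False                      # self-preservation: reject
        self.stamps.append(now)
        if len(self.stamps) > self.warn:
            self.notifications.append(now)    # alert operations staff
        return True

enforcer = ContractEnforcer(warn_per_sec=3, max_per_sec=5)
results = [enforcer.admit(now=0.1 * i) for i in range(8)]
print(results)   # requests beyond the hard limit are rejected
print(len(enforcer.notifications))
```

In a real deployment this logic lives in the service communications infrastructure, keyed by the consumer's identity, so that one consumer exceeding its contract cannot starve the others.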

The infrastructure must be capable of changing the policies associated with a contract, or establishing new contracts, without requiring a deployment of a new version of the service or the consumer solely for that reason. It is common that a change in contract may accompany an associated functionality change in a service consumer, service provider, or both, but it is the functionality change that drives the implementation change and the contract change. We never want a contract change to require an implementation change.

The service contract must also address reporting policies for service usage. The desired behavior at run-time should never be to deploy a service in production and then ignore it unless the system tells us otherwise. Usage reports should be provided to each consumer, as well as to the service provider. Analysis of these reports may trigger a change in policy, or even a need for a capacity modification, if the reports indicate the usage characteristics are changing.
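The reporting loop can be sketched as a simple aggregation over the message logs: count requests per consumer and flag any consumer whose usage has drifted well beyond the contracted rate. The contracted figures, consumer names, and tolerance are all illustrative assumptions.

```python
from collections import Counter

# Sketch of usage reporting against contracted rates. The contract
# figures and consumer names below are hypothetical.
contracted_daily = {"billing-app": 1000, "portal": 5000}

def usage_report(log_entries, tolerance=0.2):
    """log_entries: iterable of (consumer, service) tuples for one day.
    Returns per-consumer counts plus the consumers more than
    `tolerance` above their contracted daily rate."""
    counts = Counter(consumer for consumer, _ in log_entries)
    flagged = [c for c, n in counts.items()
               if c in contracted_daily
               and n > contracted_daily[c] * (1 + tolerance)]
    return counts, flagged

log = ([("billing-app", "AccountLookup")] * 1300
       + [("portal", "AccountLookup")] * 4000)
counts, flagged = usage_report(log)
print(counts["billing-app"], flagged)  # 1300 exceeds 1000 by more than 20%
```

A flagged consumer is exactly the trigger described above: either the contract's policies need to change, or capacity needs to be revisited.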

Policies for Run-Time Governance

The following are questions or policies that you should consider in your run-time governance efforts:

  • What is the normal rate of requests for a given service consumer?

  • What is the expected response time for the service provider for typical requests from that service consumer?

  • What actions are taken when the request rate for a given service consumer exceeds each of the agreed upon thresholds?

  • What actions are taken when the response time for a given service consumer exceeds each of the agreed upon thresholds?

  • Are there any time restrictions on when a particular consumer can access a service?

  • For services with multiple entry points via different technologies (for example, SOAP/HTTP, XML/HTTP, SOAP/JMS), is policy enforcement defined and consistent (if needed) for each entry point?

  • Are all security policies configured and being enforced?

  • Are service requests routed to the appropriate version for each consumer, or have appropriate transformations been applied, preserving backward compatibility?

  • Are all service messages being logged appropriately per any enterprise auditing requirements?

  • Are all service messages being logged and preserved for the purpose of debugging?

  • Are usage metrics being properly collected?

  • Are usage reports being generated and distributed appropriately?

  • Are the recipients of these reports properly reviewing them and accounting for any discrepancies in behavior?

  • Are all policies associated with message structure being enforced by the run-time infrastructure?

  • Are non-compliant messages being logged, rejected, and reported to appropriate personnel?

Remember that while the infrastructure can enforce many of the run-time governance policies, there is still a need to have people involved. If the staff deploys services into production and then forgets about them, there is significant risk of problems down the road. The lifecycle of a service consumer and a service provider must be managed from inception to decommissioning, not just from inception to production deployment.