Applied Architecture Patterns on the Microsoft Platform


Technology evaluation dimensions


In the process of evaluating technologies, we will build evaluation criteria along the following four dimensions:

  • Organizational context: Solutions built to function in an organization should be aligned with business needs and directions. Organizational context is usually provided by the enterprise architecture, which establishes general IT principles and strategies for the organization.

  • Solution design: These criteria are relevant to the process of designing the system. Design is typically the step that starts after the core architecture is completed. In Agile development, design starts sooner, but the architecture keeps a backlog of unfinished work (so-called architectural debt) that is worked on over time.

  • Solution implementation (development, testing, and deployment): These criteria focus on the next stages of solution delivery from the completed design to the deployment in production. Product development might not have a production deployment stage per se; rather, it would have a need to create installation programs and packaging.

  • Operations: Surprisingly, this is the area most neglected while the architecture is developed, because attention focuses on the business value that the solution is supposed to provide and was built for. A very typical example is giving low priority to buying (or developing) administration tools. We have seen organizations buy sophisticated and very expensive monitoring tools but fail to train their staff properly, so the tools end up simply not being used. In the most egregious example, I remember a SaaS provider that unknowingly allowed intruders to use a back door into its FTP server for eight months, simply because it did not use proper monitoring tools.

Organizational context

Organizational context provides us with a big picture. Every organization has its set of principles, implicit or explicit, and the task of the solutions architect is to build systems aligned with these principles. The following table lists some major principles that are typically developed by the organization's enterprise architecture team:

Principle

Description

Consider process improvement before applying technology solutions

Obvious as it may sound, this principle is often overlooked. Sometimes, architects (or businesses) rush into building a solution without first asking whether the need could be eliminated altogether by improving the process. We put it first as a warning sign.

The solution should satisfy business continuity needs

Some businesses are more critical than others. A bank, for example, should function even if a flood hits its data center. Disaster recovery is a major part of any solution.

Use vendor-supported versions of products

All Microsoft products (or any vendor's products) used in the solution should be under vendor support. Microsoft typically provides at least 10 years of support for its products (including 5 years of mainstream support, or 2 years after the successor product is released, whichever is longer).

Automate processes that can be easily automated

Take advantage of information systems; however, consider eliminating unnecessary tasks rather than automating them.

Design for scalability to meet business growth

This is one of the essential points of alignment between business and IT. However, look into possibilities of building flexible solutions instead of large but rigid ones.

Implement adaptable infrastructure

Infrastructure must be adaptable to change; minor business changes should not result in complete platform replacement but should rather result in changing some components of the system.

Design and reuse common enterprise solutions

In the modern enterprise, especially in service-oriented architecture (SOA) environments, enterprise solutions should produce reusable components.

Consider configuration before customization

Changing the configuration requires fewer technical skills than customizing the solution. It also produces results much more quickly.

Do not modify packaged solutions

Packaged solutions maintained by a vendor should not be modified. The days of hacking into third-party packages are gone; modifications complicate upgrades and jeopardize vendor support.

Adopt industry and open standards

From the initial assessment and inception phase of the project, you should consider industry and open standards. This will save you from re-inventing the wheel and will bring huge advantages in the long run.

Adopt a proven technology for critical needs

Many enterprises approach technologies from a conservative standpoint. Some, for example, suggest that you should never use the first couple of versions of any product. Whether you go to such extremes depends on the organization's risk tolerance.

Consider componentized architectures

Multi-tier architecture enables separating concerns in different tiers, allowing faster development and better maintenance. A service-oriented architecture paradigm emphasizes loose coupling.

Build loosely-coupled applications

Tightly coupled applications might seem easier to develop, but even when that is true, architects should consider all phases of the solution lifecycle, including maintenance and support.

Employ service-oriented architecture

Service-oriented architecture is not just a technological paradigm; it requires support from the business. In SOA, services mirror the real-world business activities that make up the organization's business processes. Employing SOA is therefore never simply a technological decision; it affects the entire business.

Design for integration and availability

Every solution might require integration with other solutions. Every solution should provide availability according to the organization's SLAs.

Adhere to enterprise security principles and guidelines

Security, being one of the most important nonfunctional requirements, has to be consistent across the enterprise.

Control technical diversity

Supporting alternative technologies carries significant costs, and eliminating redundant components improves maintainability. However, limiting diversity also sacrifices some desirable capabilities, a trade-off that will not suit everybody.

Ease of use

Following Occam's razor, simplify. Remember, at the end of the day, all systems are developed for end users, some of whom might have very little computer knowledge.

Architecture should comply with main data principles (data is an asset, data is shared, and data is easily accessible)

These three main data principles emphasize the value of data in the enterprise decision-making process.

Architecture should suggest and maintain common vocabulary and data definitions

In a complex system with participants from business to technical people, it is critical for experts with different areas of expertise to have a common language.

Solution design aspects

In this section, we look at the characteristics relevant to the overarching design of a solution. The list is certainly not exhaustive, but it provides a good basis for building a second dimension of the framework.

Areas of consideration

Description

Manageability

  • Does the system have an ability to collect performance counters and health information for monitoring? (See the Solution operation aspects section for more on this consideration.)

  • How does the system react to unexpected exception cases? Even if the system gracefully processes an unexpected error and does not crash, it might significantly affect the user experience. Are these exceptions logged at a system level or raised to the user?

  • How will the support team troubleshoot and fix problems? What tools are provided for this within the system?

Performance metrics

Good performance metrics are reliable, consistent, and repeatable. Each system might suggest its own performance metrics, but the most common are the following:

  • Average/max response times

  • Latency

  • Expected throughput (transactions per second)

  • Average/max number of simultaneous connections (users)
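The metrics above can be derived directly from raw request samples. The following is a minimal sketch, with illustrative function and field names rather than any specific monitoring tool's API, that computes average/maximum response times and throughput from a list of (start time, duration) pairs:

```python
from statistics import mean

def summarize(requests):
    """Compute common performance metrics from (start_time_s, duration_s) samples."""
    durations = [duration for _, duration in requests]
    starts = [start for start, _ in requests]
    window = max(starts) - min(starts)
    return {
        "avg_response_s": mean(durations),
        "max_response_s": max(durations),
        # Throughput: completed requests per second over the observed window.
        "throughput_tps": len(requests) / window if window else float(len(requests)),
    }

print(summarize([(0.0, 0.2), (1.0, 0.4), (2.0, 0.3)]))
```

Latency and the number of simultaneous connections would be captured in a similar way from connection-level samples.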

Reliability

  • What is the expected mean time between service failures? This metric can be obtained during testing, but the business should also provide some expectations.

  • How important is it for the system to be able to deal with internal failures and still deliver the defined services (resilience)? For some industries, such as healthcare or finance, the answer would be "critical". Systems in these industries are not supposed to be interrupted by major disasters, such as a flood or a fire in the data center.

  • Should the failure of a component be transparent to the user? If not, then what level of user impact would be acceptable (for example, whether the session state can be lost)? In the old days, a user often received some cryptic messages in case of an error, such as "The system has encountered an error #070234. Please call technical support". This is not acceptable anymore; even 404 errors on the Web are becoming more user-friendly.

  • What's the expected production availability? The availability "nines" correspond to annual downtime as follows: 99 percent allows roughly 3.65 days, 99.9 percent roughly 8.76 hours, 99.99 percent roughly 52.6 minutes, and 99.999 percent roughly 5.26 minutes of downtime per year.

  • What is the acceptable duration of a planned outage? It is also important to know what the planned outage windows are, whether they should be scheduled every week or every month, and what maintenance windows are required for each operation (service upgrade, backup, license renewal, or certificate installation).

  • What are the assurances of a reliable delivery (at least once, at most once, exactly once, and in order)?
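The relationship between the availability "nines" and allowed downtime is simple arithmetic, as this short sketch shows:

```python
def annual_downtime_hours(availability_pct):
    """Allowed downtime per year, in hours, for a given availability percentage."""
    return (100.0 - availability_pct) / 100.0 * 8760  # 8760 hours in a year

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% availability -> {annual_downtime_hours(nines):.3f} h/year downtime")
```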

Recoverability

  • Does the system support a disaster recovery (DR) plan? The DR plan is typically developed during the architecture stage. It should include the DR infrastructure description, service-level agreements (SLAs), and failover procedures. The system might seamlessly switch to the DR site in the case of a major failure or might require manual operations.

  • What are the system's backup and restore capabilities?

  • What is the acceptable duration of an unplanned outage? Some data loss in the case of an unplanned outage is inevitable, and architects should also consider manual data recovery procedures.

Capacity

  • What are the data retention requirements, that is, how much historical data should be available? The answer to this question depends on the organizational policies and on the industry regulations as well.

  • What are the data archiving requirements, that is, when can the data be archived? Some industry regulations, for example, auditing, might affect the answer to this question.

  • What are the data growth requirements?

  • What are the requirements for using large individual datasets?

Continuity

  • Is there a possibility of data loss, and how much loss is acceptable? Very often, businesses answer "no" to this question, which creates a lot of grief among architects. The proper question should be: "In the case of a data loss, how much data can be restored manually?"

Security

  • What are the laws and regulations in the industry with regard to security? The organization's security policies should be aligned with those of the industry.

  • What are the organization's internal security policies? What are the minimal and the optimal sets of security controls required by the organization? The security controls might require zoning, message- or transport-level encryption, data injection prevention (such as SQL or XML injection), data sanitization, IP filtering, strong password policies, and others.

  • What are the roles defined in the system? Each role should have a clear list of actions that it can perform. This list defines authorization procedures.

  • What are the login requirements, and particularly, what are the password requirements?

  • What are the encryption requirements? Are certificates involved? In the case of integration with other internal or external systems, is mutual certificate authentication required? What are the certificate maintenance policies; for example, how often should the certificates be updated?

  • What are the authentication and authorization approaches for different components of the system?

Auditability

  • What are the regulations in the industry that are affecting the audit? Which data should be available for the audit? Which data should be aggregated?

  • What data entities and fields should be audited?

  • What additional data fields should be added for the audit (for example, timestamps)?
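As a sketch of the last point, audit fields such as timestamps and actor identifiers can be attached to every record. The class and field names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditedRecord:
    """Illustrative record carrying common audit fields alongside the payload."""
    payload: dict
    created_by: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    modified_by: Optional[str] = None
    modified_at: Optional[datetime] = None

    def update(self, actor, changes):
        # Record who changed the data and when, for the audit trail.
        self.payload.update(changes)
        self.modified_by = actor
        self.modified_at = datetime.now(timezone.utc)

record = AuditedRecord({"status": "new"}, created_by="alice")
record.update("bob", {"status": "approved"})
print(record.modified_by, record.payload["status"])
```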

Maintainability

  • What architecture, design, and development standards must be followed, and where are exclusions justified? Maintaining code is a tough task, especially maintaining bad code. Proper documentation, comments inside the code, and especially following standards help a lot.

  • Which system components might require rapid changes? Those components should be independent of other components; their replacement should affect the rest of the system minimally.

Usability

  • Can the system in its entirety support single sign-on (SSO)? Single sign-on has become a feature expected by most users and a mandatory requirement in many organizations.

  • How current must the data be when presented to the user? When a data update happens, should the end user see the changes immediately?

  • Are there requirements for multi-lingual capabilities? Are they possible in the future?

  • What are the accessibility requirements?

  • What is the user help approach? User help information can be delivered in many ways: on the Web, by system messages, embedded in the application, or even via a telephone by the support team.

  • Can the system support the consistency of user messages across all presentation layers? For example, how does the system handle messages delivered by the Web and the mobile application presentation layers? They cannot be the same because of the mobile application limitations; how should they be synchronized?

Interoperability

  • What products or systems will the target system be integrated with in the future?

  • Are there any industry messaging standards? In the world of web services, many standards have emerged. The most common set of interoperability standards is the WS-I (Web Services Interoperability) profiles.

Scalability

  • What is the expected data growth?

  • What is the expected user base growth?

  • What new business functionality is anticipated in the future?

  • Can the system be scaled vertically (by increasing the capacity of single servers) and horizontally (by adding more servers)?

  • What are the system load balancing capabilities?
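As a minimal illustration of horizontal scaling, the following sketch dispatches requests across a server pool in round-robin fashion (the server names are hypothetical, and real load balancers add health checks and weighting):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher over a horizontally scaled server pool."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        return next(self._pool)

balancer = RoundRobinBalancer(["app01", "app02", "app03"])
print([balancer.next_server() for _ in range(4)])  # app01, app02, app03, app01
```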

Portability

  • Are there any requirements to support the system on different platforms? This question has become very important, especially in the world of mobile and online applications, where several major mobile platforms as well as several browsers compete in the market.

Data quality

  • What are the data quality requirements (deduplication or format standardization)?

Error handling

  • Failures within the system, even unpredictable ones, should be captured in a predictable way.

  • Failures within connected systems or system components should be handled consistently.

  • "Technical" error messages should not be exposed to users.

  • What are the logging and monitoring requirements? Capturing errors is essential for the analysis and improving the system quality.
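The points above can be sketched as a top-level handler that logs the technical details for analysis while returning a friendly, non-technical message to the user. The incident-ID scheme is an illustrative assumption, not a prescribed design:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(action):
    """Run an operation; log technical details, return a user-friendly message."""
    try:
        return action()
    except Exception:
        # The incident ID links the user-facing message to the detailed log entry.
        incident = uuid.uuid4().hex[:8]
        log.exception("Unhandled failure (incident %s)", incident)
        return ("Something went wrong. Please contact support "
                f"and quote incident {incident}.")

print(handle_request(lambda: 1 / 0))
```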

Solution implementation aspects

Should design, coding, and other standards be automatically enforced through tooling, or is this a more manual process? Should the source control system be centralized and integrated in a continuous integration model? Should the programming languages be enforced by an organizational policy or be chosen by developers? All these questions belong to the realm of solution delivery. If architects select a technology that cannot be delivered on time or with given skillsets, the entire solution will suffer.

Solution delivery also depends very much on the project management approach. In a modern Agile world, delivery technologies should be chosen to allow for rapid changes, quick prototyping, quick integration of different components, efficient unit testing, and bug fixing. Agile projects are not easier or cheaper than Waterfall projects. They aim for rapid, high-quality delivery, but at a cost. For example, it is well known that Agile projects need more skilled (and therefore more expensive) developers. Some estimates put the required share of senior developers at up to 50 percent of the team.

The following table presents some considerations that affect the technology selection:

Areas of consideration

Description

Are skilled developers available in the given timeframe?

  • As mentioned previously, rapid quality delivery requires a larger number of skilled people. If the selected technology is brand new, acquiring all the necessary resources will not be easy.

What are the available strategies for resourcing?

  • There are several strategies for resourcing in addition to in-house development: outsourcing (hiring another organization for the development and testing), co-sourcing (hiring another organization to help deliver the solution), in-house development using contract resources, and any mixture of the above.

Based on the delivery methodology, what environments have to be supported for the delivery?

Typically, there are several environments that are required to deliver a complex solution to the production stage. Some of them are as follows:

  • Sandbox environment: This is the environment where developers and architects can go wild. Anything can be tried, anything can be tested, the environment can be crashed every hour—and it should definitely be isolated from any other environment.

  • Development environment: Usually, every developer maintains his/her own development environment, on the local computer or virtualized. Development environments are connected to a source control system and often to a more sophisticated continuous integration system.

  • Testing environments: Depending on the complexity of the system, many testing environments can exist: for functional testing, for system integration testing, for user acceptance testing, or for performance testing.

  • Staging or preproduction environment: The purpose of this environment is to give the new components a final run. Performance or resilience testing can also be done in this environment. Ideally, it mimics a production environment.

  • Production and disaster recovery environments: These are target environments.

  • Training environment: This environment typically mimics the entire production environment or its components on a smaller scale. For example, the training environment does not require supporting all performance characteristics but requires supporting all system functionalities.

Is environment virtualization considered?

  • Virtualization is now a standard approach in virtually all medium and large organizations.

Is cloud development considered?

  • Cloud development (supported by Microsoft Azure) might be considered if the organization does not want to deal with complex hardware and infrastructure, for example, when it does not have a strong IT department. Cloud development also gives you the advantage of quick deployment, since creating environments in Azure is often faster than procuring them within the organization.

What sets of development and testing tools are available?

  • What programming languages are considered?

  • What third-party libraries and APIs are available?

  • What open source resources are available? Open source licensing models should be carefully evaluated before you consider using tools for commercial development.

  • What unit testing tools are available?

  • What plugins or rapid development tools are available?

Does development require integration with third parties (vendors, partners, and clients)?

  • Will third-party test/staging environments be required for development?

  • Are these systems documented, and is this documentation available?

  • Is there a need for cooperation with third-party development or support teams?

In case of service-oriented architecture, what are the service versioning procedures?

  • Can a service be upgraded to a new version seamlessly without breaking operations?

  • Can several versions of the same service operate simultaneously?

  • How do service consumers distinguish between the versions of the same service?
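One common way to let several versions of a service operate simultaneously is to key the routing on an explicit version identifier. The following is a hedged sketch under that assumption; the service name, versions, and payloads are hypothetical:

```python
# Registry mapping (service, version) to a handler, so that several
# versions of the same service can operate side by side.
HANDLERS = {
    ("orders", "v1"): lambda req: {"total": req["qty"] * req["price"]},
    ("orders", "v2"): lambda req: {"total": req["qty"] * req["price"],
                                   "currency": req.get("currency", "USD")},
}

def dispatch(service, version, request):
    """Route a request to the requested service version; fail if retired."""
    handler = HANDLERS.get((service, version))
    if handler is None:
        raise LookupError(f"{service}/{version} is retired or unknown")
    return handler(request)

print(dispatch("orders", "v2", {"qty": 2, "price": 5.0}))
```

Retiring a version then amounts to removing its registry entry, which makes the impact on remaining consumers explicit.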

What is the service retirement procedure?

  • Can a service be retired seamlessly without breaking operations?

  • How does it affect service consumers?

What service discovery mechanism is provided?

  • Is a service registry available within the proposed technology?

  • Is an automated discovery available?

  • Is a standard discovery mechanism available, such as UDDI?

Solution operation aspects

Even after we have satisfied our design and implementation needs, we absolutely must consider the operational aspects of the proposed solution. Although the project delivery team inevitably moves on to other work after a successful deployment, the actual solution might remain in production for years. If we have a grand architecture that is constructed cleanly but is an absolute nightmare to maintain, then we should consider the project failed. There are many examples of solutions like this. Consider, for instance, a system that performs sophisticated calculations and requires high-end computers, but runs on only a small number of servers. If an architect suggests that the organization use Microsoft System Center for monitoring, that would create a nightmare for the operations team. System Center is a very large tool: even formal training would take the team a week or two, and the learning curve would be steep. At the end of the day, perhaps only 5 percent of System Center's capabilities would be utilized.

Operational concerns directly affect the solution design. These factors, often gathered through nonfunctional requirements, have a noticeable effect on the architecture of the entire system.

Areas of consideration

Description

Performance indicators provide essential information about the system behavior. Can they be captured and monitored?

  • What exactly are the metrics that can be monitored (the throughput, latency, or number of simultaneous users)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?
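As an illustrative sketch of the file-based delivery mechanism, a collector can accumulate named counters and emit JSON snapshots that an external monitor polls. The class name and format are assumptions, not any specific product's API:

```python
import json
import time
from collections import Counter

class MetricsCollector:
    """Accumulates named counters and emits snapshots for an external monitor."""
    def __init__(self):
        self.counters = Counter()

    def increment(self, name, value=1):
        self.counters[name] += value

    def snapshot(self):
        # File-based delivery: a monitoring agent can poll or tail this output.
        return json.dumps({"ts": time.time(), "counters": dict(self.counters)})

metrics = MetricsCollector()
metrics.increment("requests")
metrics.increment("requests")
metrics.increment("errors")
print(metrics.snapshot())
```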

Can the hardware and virtual machine health status be captured and monitored?

  • What exactly are the metrics that can be monitored (the CPU usage, memory usage, CPU temperature, or disk I/O)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?

In the case of a service-oriented architecture, can the service behavior be captured and monitored?

  • What exactly are the metrics that can be monitored (the number of requests in a given time interval; the number of policy violations; the number of routing failures; the minimum, maximum, and average frontend and backend response times; and the percentage of service availability)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?

What kind of performance and health reports should be provided?

  • Daily, weekly, or monthly?

  • Aggregated by server, by application, by service, or by operation?

What kind of notification system should be provided?

  • What delivery mechanism (e-mail or SMS) is used?

  • Is it integrated with a communication system such as Microsoft Exchange?

Are any dashboard and alerts required?

  • Does the real-time monitor (dashboard) require data aggregation?

  • What kind of metric thresholds should be configurable?

What are the backup and restore procedures?

  • What maintenance window (if any) is required for the backup?

  • Do the backup or restore procedures require integration with third-party tools?

What are the software upgrade procedures?

  • What maintenance window (if any) is required for the version upgrade?

  • How does the upgrade affect the disaster recovery environment?

  • What are the procedures of license changes? Do they require any maintenance window?

What are the certificate maintenance procedures?

  • How often are the certificates updated: every year, every three years, or never?

  • Does the certificate update require service interruption?