
Architectural Patterns

By: Anupama Murali, Harihara Subramanian J, Pethuru Raj Chelliah

Overview of this book

Enterprise Architecture (EA) is typically an aggregate of the business, application, data, and infrastructure architectures of any forward-looking enterprise. Due to constant changes and rising complexities in the business and technology landscapes, producing sophisticated architectures is on the rise. Architectural patterns are gaining a lot of attention these days. The book is divided in three modules. You'll learn about the patterns associated with object-oriented, component-based, client-server, and cloud architectures. The second module covers Enterprise Application Integration (EAI) patterns and how they are architected using various tools and patterns. You will come across patterns for Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Resource-Oriented Architecture (ROA), big data analytics architecture, and Microservices Architecture (MSA). The final module talks about advanced topics such as Docker containers, high performance, and reliable application architectures. The key takeaways include understanding what architectures are, why they're used, and how and where architecture, design, and integration patterns are being leveraged to build better and bigger systems.
Table of Contents (13 chapters)

Software architecture patterns

This section describes the prominent and dominant software architecture patterns.

There are several weaknesses associated with monolithic applications:

  • Scalability: Monolithic applications are designed to run on a single, powerful system within a process. Increasing the application's speed or capacity requires forklifting it onto newer and faster hardware, which takes significant planning and consideration.
  • Reliability and availability: Any kind of faults or bugs within a monolithic application can take the entire application offline. Additionally, updating the application typically requires downtime in order to restart services.
  • Agility: Monolithic code bases become increasingly complex as features are being continuously added, and release cycles are usually measured in periods of 6-12 months or more.

As already mentioned, legacy applications are monolithic in nature and massive in size. Refactoring and remedying them to be web-, cloud-, and service-enabled is bound to consume a lot of time, money, and talent. As enterprises consistently prune their IT budgets while expecting more with less from IT teams, the time has arrived for leveraging various architectural patterns, individually or collectively, to produce and deploy modernized applications. The following sections detail the various promising and potential architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks for all kinds of software applications. The structure and behavior of any software application can be represented through the use of multiple and interoperable objects. Objects elegantly encapsulate the various properties and the tasks in an optimized and organized manner. Objects connect, communicate, and collaborate through well-defined interfaces. Therefore, the object-oriented architectural style has become the dominant one for producing object-oriented software applications. Ultimately, a software system is viewed as a dynamic collection of cooperating objects, instead of a set of routines or procedural instructions.

We know that there are proven object-oriented programming methods and enabling languages, such as C++, Java, and so on. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy in producing highly modular (highly cohesive and loosely coupled), usable, and reusable software applications.

The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components. Also, the complex business logic that requires abstraction and dynamic behavior can effectively use this OOA.
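The OOA tenets above can be sketched with a small, hypothetical payment example (the class and method names here are illustrative, not from the book): state is encapsulated inside objects, and collaborators depend only on a shared interface, so any conforming object can be substituted polymorphically.

```python
from abc import ABC, abstractmethod

class PaymentMethod(ABC):
    """Encapsulates payment state and behavior behind one interface."""

    @abstractmethod
    def pay(self, amount: float) -> str: ...

class CardPayment(PaymentMethod):
    def __init__(self, card_number: str):
        self._card_number = card_number  # encapsulated, hidden state

    def pay(self, amount: float) -> str:
        return f"Charged {amount:.2f} to card ending {self._card_number[-4:]}"

class WalletPayment(PaymentMethod):
    def pay(self, amount: float) -> str:
        return f"Deducted {amount:.2f} from wallet"

def checkout(method: PaymentMethod, amount: float) -> str:
    # Polymorphism: checkout collaborates with any object honoring the interface.
    return method.pay(amount)
```

Here, `checkout` never inspects the concrete class it is given; that is the loose coupling OOA aims for.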

Component-based assembly (CBA) architecture

Monolithic and massive applications can be partitioned into multiple interactive and smaller components. When components are found, bound, and composed, we get full-fledged software applications. Components have emerged as the building blocks for designing and developing enterprise-scale applications. Thus, the aspects of decomposing complicated applications and composing components to arrive at competent applications receive a lot of traction. Components expose well-defined interfaces for other components to find and communicate with. This setup provides a higher level of abstraction than the object-oriented design principles. CBA does not focus on issues such as communication protocols and shared state. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.
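The dependency injection pattern mentioned above can be sketched minimally (the `OrderService` and repository names are hypothetical, chosen only for illustration): the service receives its collaborator through the constructor instead of creating it, so any component honoring the same save/find contract can be plugged in.

```python
class InMemoryRepository:
    """A swappable component behind a simple save/find interface."""
    def __init__(self):
        self._items = {}

    def save(self, key, value):
        self._items[key] = value

    def find(self, key):
        return self._items.get(key)

class OrderService:
    # Constructor injection: the service depends on an interface (any object
    # with save/find), not a concrete repository, keeping components loosely
    # coupled, replaceable, and reusable.
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, order):
        self._repository.save(order_id, order)
        return self._repository.find(order_id)
```

In a test, the in-memory repository stands in for a real database component without touching `OrderService`; that substitutability is the point of the pattern.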

Aspect-oriented programming (AOP) aspects are another popular application building block. By deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code. Aspect-oriented programming entails breaking down program logic into distinct parts (concerns, the cohesive areas of functionality). All programming paradigms intrinsically support some level of grouping and encapsulation of concerns into independent entities by providing abstractions (for example, functions, procedures, modules, classes, methods, and so on). These abstractions can be used for implementing, abstracting, and composing various concerns. Some concerns, however, cut across multiple abstractions in a program and defy these forms of implementation. These concerns are called cross-cutting concerns or horizontal concerns.

Logging exemplifies a cross-cutting concern because a logging strategy necessarily affects every logged part of the system. Logging thereby cross-cuts all logged classes and methods. In short, aspects are being represented as cross-cutting concerns and they are injected on a need basis. Through the separation of concerns, the source code complexity comes down sharply and the coding efficiency is bound to escalate.
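In Python, a decorator gives a lightweight sketch of this aspect-style separation (a minimal illustration, not a full AOP framework such as AspectJ): the logging concern is written once and woven around any business function, which stays free of logging code.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

def logged(func):
    """Injects the logging cross-cutting concern around any function,
    keeping the business logic itself free of logging statements."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("entering %s", func.__name__)
        result = func(*args, **kwargs)
        log.info("leaving %s", func.__name__)
        return result
    return wrapper

@logged
def transfer(amount):
    # Pure business logic; the aspect is applied declaratively above.
    return f"transferred {amount}"
```

Changing the logging strategy now means changing one decorator, not every logged class and method.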

Agent-oriented software engineering (AOSE) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to the proven object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents, in a way specific to their class of agents.

A software agent is a persistent, goal-oriented computer program that reacts to its environment and runs without continuous direct supervision to perform some function for an end user or another program. A software agent is the computer analog of an autonomous robot. There are a set of specific applications and industry verticals that require the unique services of software agents. Thus, we have software objects, components, aspects, and agents as the popular software construct for building a bevy of differently abled applications.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized.

Architects have to have a good understanding of the business domain to model it. The development team works closely with business domain experts to model the domain in a precise and perfect manner. In this, the team agrees to use only a single language that is focused on the business domain, excluding any technical jargon. As the core of the software is the domain model, which is a direct projection of this shared language, the team can quickly find gaps in the software by analyzing the language around it. The DDD process holds the goal not only of implementing the language being used, but also of improving and refining the language of the domain. This, in turn, benefits the software being built.

DDD is good if we have a complex domain and we wish to improve communication and understanding within the development team. DDD can also be an ideal approach if we have large and complex enterprise data scenarios that are difficult to manage using the existing techniques.
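A minimal sketch of a DDD-style model, assuming a hypothetical loan-application domain (the names `LoanApplication` and `Money` come from the domain's shared language, not from any technical framework): behavior lives on the domain concepts themselves, and the code reads in the vocabulary a domain expert would use.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """A value object expressed in the domain's shared language."""
    amount: int
    currency: str = "USD"

@dataclass
class LoanApplication:
    """An entity named as domain experts name it, free of technical jargon."""
    applicant: str
    requested: Money
    status: str = "submitted"

    def approve(self):
        # Domain behavior lives with the concept it belongs to, and the
        # domain rule is stated in the model, not buried in a service layer.
        if self.status != "submitted":
            raise ValueError("only a submitted application can be approved")
        self.status = "approved"
```

A domain expert can read `LoanApplication.approve` and confirm or correct the rule, which is exactly the gap-finding feedback loop DDD aims for.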

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. This pattern helps to design distributed systems that involve a client system, a server system, and a connecting network. The simplest form of client/server architecture involves a server application that is accessed directly by multiple clients. This is referred to as a two-tier architecture application. Web and application servers play the server role in order to receive client requests, process them, and send the responses back to the clients. The following figure is the pictorial representation of the client/server pattern:

The peer-to-peer (P2P) applications pattern allows the client and server to swap their roles in order to distribute and synchronize files and information across multiple clients. Every participating system can play the client as well as the server role. They are just peers working towards the fulfillment of business functionality. It extends the client/server style through multiple responses to requests, shared data, resource discovery, and resilience to the removal of peers.
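The two-tier request/response interaction can be sketched with plain sockets (a toy echo protocol, assumed only for illustration): the server listens, processes each request, and replies; the client connects, sends its request, and blocks until the response arrives.

```python
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    """A minimal two-tier server: accept one request, process it, reply."""
    srv = socket.socket()
    srv.bind((host, port))   # port 0 lets the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())  # server-side processing
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]  # the port the client should contact

def client_request(port, message):
    """The client side: connect, send a request, wait for the response."""
    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    cli.sendall(message.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply
```

Note how `client_request` blocks on `recv` until the server answers; that synchronous waiting is exactly the characteristic the event-driven patterns later in this chapter set out to avoid.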

The main benefits of the client/server architecture pattern are:

  • Higher security: All data gets stored on the server, which generally offers a greater control of security than client machines.
  • Centralized data access: Because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
  • Ease of maintenance: The server system can be a single machine or a cluster of multiple machines. The server application and the database can be made to run on a single machine or replicated across multiple machines to ensure easy scalability and high availability. The multiple machines eventually form a cluster through appropriate networking. Lately, enterprise-grade server applications are made up of multiple subsystems, and each subsystem/microservice can run on a separate server machine in the cluster. Another trend is that each subsystem and its instances are also being hosted and run on multiple machines. Leveraging single or multiple server machines for executing server applications and databases in this way ensures that a client remains unaware of, and unaffected by, a server repair, upgrade, or relocation.

However, the traditional two-tier client/server architecture pattern has numerous disadvantages. Firstly, the tendency to keep both the application and the data in one server can negatively impact system extensibility and scalability. Secondly, the server can be a single point of failure; reliability is the main worry here. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture. This multi-tier architecture not only surmounts the issues just mentioned, but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

The two-tier architecture is neither flexible nor extensible. Hence, multi-tier distributed computing architecture has attracted a lot of attention. The application components can be deployed in multiple machines (these can be co-located and geographically distributed). Application components can be integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), common object request broker architecture (CORBA), enterprise Java beans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture.

Thus, based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns are for simplifying the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern is an improvement over the client/server architecture pattern. This is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: presentation / user interface layer, business logic layer, and data persistence layer. Additional layers for enabling integration with third-party applications/services can be readily incorporated into this layered architecture. There are primarily database management systems at the backend, the middle tier involves an application and web server, and the presentation layer is primarily user interface applications (thick clients) or web browsers (thin clients). With the fast proliferation of mobile devices, mobile browsers are also being attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer accordingly. The power of plug and play gets realized with this approach. Additional layers can be fit in as needed. There are model view controller (MVC) pattern-compliant frameworks that hugely simplify building enterprise-grade and web-scale applications. MVC is a web application architecture pattern. The main advantage of the layered architecture is the separation of concerns. That is, each layer can focus solely on its role and responsibility. The layered and tiered pattern makes the application:

  • Maintainable
  • Testable
  • Easy to assign specific and separate roles
  • Easy to update and enhance layers separately

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. The current and legacy-tiered applications can be easily modified at each layer with newer technologies and tools. This pattern remarkably moderates and minimizes the development, operational, and management complexities of software applications. The partitioning of different components participating in the system can be replaced and substituted by other right components. When there are business and technology changes, this layered architecture comes in handy in embedding newer things in order to meet varying business requirements.

As illustrated in the following figure, there can be multiple layers fulfilling various needs. Some layers can be termed as open in order to be bypassed during some specific requests. In the figure, the services layer is marked as open. That is, requests are allowed to bypass this opened layer and go directly to the layer under it; the business layer is thus allowed to go directly to the persistence layer. Thus, the layered approach is highly open and flexible.

In short, the layered or tiered approach is bound to moderate the rising complexity of software applications. Also, by allowing certain layers to be bypassed, flexibility is easily incorporated. Additional layers can be embedded as needed in order to produce highly synchronized applications.

Event-driven architecture (EDA)

Generally, server applications respond to client requests. That is, the request-and-reply method is the main one for interactions between clients and servers, as per the famous client/server architectural style. This amounts to pulling information from servers. The communication is also synchronous. In this case, both clients and servers have to be available online in order to initiate and accomplish the tasks. Further on, when service requests are being processed and performed by server machines, the requesting services/clients have to wait to receive the intended response from the servers. That means clients cannot do any other work while waiting for the servers' responses.

The world is eventually becoming event-driven. That is, applications have to be sensitive and responsive proactively, pre-emptively, and precisely. Whenever an event happens, applications have to receive the event information and plunge into the necessary activities immediately. The request-and-reply notion gives way to the fire-and-forget tenet. The communication becomes asynchronous. There is no need for the participating applications to be available online all the time.

An event is a noteworthy thing that happens inside or outside of any business. An event may signify a problem, an opportunity, a deviation, a state change, or a threshold breach. Every event occurrence has an event header and an event body. The event header contains elements describing the event occurrence details, such as specification ID, event type, name, creator, timestamp, and so on. The event body concisely yet unambiguously describes what happened. The event body has to have all the right and relevant information so that any interested party can use that information to take necessary action in time. If the event is not fully described, then the interested party has to go back to the source system to extract the value-adding information.
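The header/body split described above can be sketched as a pair of small data classes (the field names follow the header elements listed in the text; the `OrderPlaced` example is hypothetical): the body carries everything a consumer needs, so no round trip to the source system is required.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class EventHeader:
    """Describes the occurrence: event type, name, creator, timestamp."""
    event_type: str
    name: str
    creator: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass(frozen=True)
class Event:
    header: EventHeader
    # The body must be self-sufficient: every field an interested party
    # needs to act, without going back to the source system.
    body: dict

order_placed = Event(
    header=EventHeader("OrderPlaced", "order-1042", "web-checkout"),
    body={"order_id": 1042, "total": 99.95, "currency": "USD"},
)
```

A fulfillment service receiving `order_placed` can act on `body` alone; an under-described body would force the chattier source-system lookup the text warns against.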

EDA is typically based on an asynchronous message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as series of events. EDA does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems. The common issues being introduced by system dependencies are getting eliminated through the adoption of the proven and potential EDA.

We have seen various forms of events used in different areas. There are business and technical events. Systems update their status and condition emitting events to be captured and subjected to a variety of investigations in order to precisely understand the prevailing situations. The submission of web forms and clicking on some hypertexts generate events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, short message service (SMS), instant messaging, and so on are events not to be taken lightly. There can be coarse-grained and fine-grained events. Typically, a coarse-grained event is composed of multiple fine-grained events. That is, a coarse-grained event gets abstracted into business concepts and activities. For example, a new customer registration has occurred on the external website, an order has completed the checkout process, a loan application is approved in underwriting, a market trade transaction is completed, a fulfillment request is submitted to a supplier, and so on. On the other hand, fine-grained events such as infrastructure faults, application exceptions, system capacity changes, and change deployments are still important. But their scope is local and limited.

There are event processing engines, message-oriented middleware (MoM) solutions such as message queues and brokers to collect and stock event data and messages. Millions of events can be collected, parsed, and delivered through multiple topics through these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real-time on what to do next.
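The publish/subscribe mechanics of such MoM solutions can be sketched with a toy in-memory broker (real systems such as message queues and brokers add persistence, delivery guarantees, and distribution; the topic names here are illustrative): producers publish to topics, and only the subscribers listening on a topic receive its events.

```python
from collections import defaultdict

class Broker:
    """A toy in-memory broker: producers publish events to named topics,
    and consumers subscribe with callbacks for the topics they care about."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver to every subscriber of this topic; other topics are
        # filtered out simply by never being subscribed to.
        for callback in self._subscribers[topic]:
            callback(event)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1})
broker.publish("payments", {"id": 2})  # no subscriber here, so it is ignored
```

The producer never knows who (if anyone) is listening; that ignorance is the decoupling EDA relies on.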

EDA style is built on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real-time. The EDA pattern enables highly reactive enterprise applications. Real-time analytics is the new normal with the surging popularity of the EDA pattern.

Anuradha Wickramarachchi writes in his blog that this is the most common distributed asynchronous architecture. This architecture is capable of producing highly scalable systems. The architecture consists of single-purpose event processing components that listen to events and process them asynchronously. There are two main topologies in the event-driven architecture:

  • Mediator topology: The mediator topology has a single event queue and a mediator which directs each of the events to relevant event processors. Usually, events are fed into the event processors passing through an event channel to filter or pre-process events. The implementation of the event queue could be in the form of a simple message queue or through a message passing interface leveraging a large distributed system, which intrinsically involves complex messaging protocols. The following diagram demonstrates the architectural implementation of the mediator topology:
  • Broker topology: This topology involves no event queue. Event processors are responsible for obtaining events, processing and publishing another event indicating the end. As the name of the topology implies, event processors act as brokers to chain events. Once an event is processed by a processor, another event is published so that another processor can proceed.

As the diagram indicates, some event processors just process and leave no trace and some tend to publish new events. The steps of certain tasks are chained in the manner of callbacks. That is, when one task ends, the callback is triggered, and all the tasks remain asynchronous in nature.
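The mediator topology can be sketched as a toy, synchronous implementation (the event types and processors are hypothetical, and a real system would run the processors asynchronously): a single event queue feeds a mediator, which routes each event to the relevant single-purpose processors.

```python
import queue

class Mediator:
    """Mediator topology sketch: one event queue plus a mediator that
    directs each event to the processors registered for its type."""
    def __init__(self):
        self.event_queue = queue.Queue()
        self.routes = {}      # event type -> list of event processors
        self.results = []

    def register(self, event_type, processor):
        self.routes.setdefault(event_type, []).append(processor)

    def run(self):
        # Drain the queue, routing each event; in a production system the
        # processors would run asynchronously rather than inline like this.
        while not self.event_queue.empty():
            event = self.event_queue.get()
            for processor in self.routes.get(event["type"], []):
                self.results.append(processor(event))

mediator = Mediator()
mediator.register("order", lambda e: f"billed {e['id']}")
mediator.register("order", lambda e: f"shipped {e['id']}")
mediator.event_queue.put({"type": "order", "id": 7})
mediator.run()
```

In the broker topology, by contrast, there would be no central queue: each processor would itself publish a follow-on event for the next processor in the chain.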

A prominent example is programming a web page with JavaScript. This involves writing small modules that react to events like mouse clicks or keystrokes. The browser itself orchestrates all of the inputs and makes sure that only the right code sees the right events. This is very different from the layered architecture, where all data will typically pass through all layers.

The major issues with EDA

The EDA pattern lacks the atomicity of transactions, since there is no fixed execution sequence of the events. This is because event processors are implemented to be highly distributed, decoupled, and asynchronous. The results are also expected to be provided at a future time, mostly through callbacks. Testing systems with an event-driven architecture is not easy due to the asynchronous nature of the processing. Finally, since the tasks are asynchronous and non-blocking, executions happen in parallel, delivering higher performance; this gain generally outweighs the cost of the queueing mechanisms.

Business enterprises are being bombarded with a large number of simple as well as complex events every day, and the enterprise and cloud IT teams have to have the appropriate event capture and processing engines in place to take corrective actions and to give a pertinent answer in real-time. The well-known examples include all kinds of real-time and real-world IT systems, such as trade settlement systems, flight reservation systems, real-time vehicle location data for transportation and logistics companies, streaming stock data for financial services companies, and so on. Companies empower these systems to comfortably handle large volumes of complex data in real time.

Service-oriented architecture (SOA)

We have been fiddling with object-oriented, component-based, aspect-oriented, and agent-based software development processes. However, with the arrival of service paradigms, software packages and libraries are being developed as a collection of services. That is, software systems and their subsystems are increasingly expressed and exposed as services. Services are capable of running independently of the underlying technology. Also, services can be implemented using any programming and script languages.

Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable. Services interact with one another through messaging. There are service providers/developers and consumers/clients. There are service discovery services that innately leverage both private and public service registries and repositories. Client services can find their serving services dynamically through service discovery services.

Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting services. Interfaces give the required separation between services. All kinds of deficiencies and differences of service implementation get hidden by the service interface. To make the service interface easy to use by other services, it is a good idea to use a schema definition that defines the structure of the messages. When a service is used by multiple other services, formalizing the service with a contract is paramount. A contract bounds the service with schemas, a clear message exchange pattern, and policies. Policies define the QoS attributes, such as scalability, sustainability, security and so on. SOA differs from the client/server architecture in the sense that services are universally available and stateless, while client/server architecture requires tight coupling among the participants.
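The interface/implementation split and the schema-bound contract described above can be sketched as follows (the `QuoteService` name and the dict-based schema are stand-ins for a real WSDL/XSD or JSON Schema contract): consumers see only the interface, and every incoming message is validated against the agreed message structure.

```python
from abc import ABC, abstractmethod

# A hypothetical message schema standing in for an XML/JSON schema contract.
QUOTE_REQUEST_SCHEMA = {"symbol": str, "quantity": int}

def validate(message, schema):
    """Reject any message that does not honor the agreed contract."""
    return (set(message) == set(schema)
            and all(isinstance(message[k], t) for k, t in schema.items()))

class QuoteService(ABC):
    """The interface: the single point of contact for requesting the service."""
    @abstractmethod
    def get_quote(self, message: dict) -> dict: ...

class StockQuoteService(QuoteService):
    """One implementation; its deficiencies and internals stay hidden
    behind the interface, as the text describes."""
    def get_quote(self, message):
        if not validate(message, QUOTE_REQUEST_SCHEMA):
            raise ValueError("message violates the service contract")
        return {"symbol": message["symbol"], "price": 101.5}
```

Swapping in a different implementation (a cache, a different market-data backend) changes nothing for consumers, because they are bound only to the interface and the schema.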

Precisely speaking, SOA enables application functionality to be provided as a set of services, and the creation of personal as well as professional applications that make use of software services.

Service-inspired integration (SOI)

Services can integrate disparate and distributed applications and data sources. The enterprise service bus (ESB) is the service middleware enabling service-based integration of multiple assets and resources. The ESB facilitates service interconnectivity, routing, remediation, enrichment, governance, and so on. The ESB is the integration middleware for any service environment, where the message is the basic unit of interaction between services. An ESB is lightweight compared with previous middleware solutions, such as the EAI hub, because it obviates the need for custom-made connectors, drivers, and adapters for integrating processes/applications, data sources, and UIs.

Let us consider a sample scenario. Application A is only capable of exporting files to a particular directory and application B would like to get some information out of an exported file in a SOAP message over HTTP. The ESB can implement a message flow that is triggered by a SOAP request message from application B and read the requested information of the exported file of application A with a file adapter.

The ESB gathers the requested information and transforms it into a SOAP message corresponding to an agreed upon XML schema. Then the ESB sends the SOAP message back to application B over HTTP.

The message flow is an important ingredient of any ESB solution. A message flow is a definition that describes where the message originates from, how it arrives at the ESB, and then how it lands at the target service/application. Matching is another prominent functionality provided by the ESB. This function prescribes which message flow must be executed when a message arrives in the ESB.

There are other key functionalities: routing, protocol translation, and transformation of the message format. Routing is all about directing messages from one service to another; it is often used by a message flow module to describe which service will be called for a particular incoming message. The second core functionality is protocol translation. There are many application and message transmission protocols, and an ESB can translate the requester's protocol into a provider-compatible protocol. Suppose the requester supports the HTTP protocol and the provider/receiver supports the FTP protocol. Then, this functionality of the ESB translates HTTP to FTP to enable different and distributed applications to find, bind, and interact. The following figure is the macro-level SOA:

The last core function of the ESB is the message/data format transformation. When a requestor sends a message in SOAP format, the provider can be called by the ESB with an EDIFACT message format. The technology behind such message-format transformations can be the proven XML stylesheet language transformation (XSLT).
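The three core ESB functions just described (routing, protocol translation, and message-format transformation) can be sketched together in a toy bus (the message types are hypothetical, and the translation/transformation steps are placeholders for real HTTP-to-FTP and SOAP-to-EDIFACT handling):

```python
class MiniESB:
    """A toy bus sketching the three ESB core functions: routing,
    protocol translation, and message-format transformation."""
    def __init__(self):
        self.routes = {}      # message type -> target service callable

    def register(self, message_type, service):
        self.routes[message_type] = service

    def translate_protocol(self, message, source, target):
        # Placeholder for real protocol translation, e.g. HTTP -> FTP.
        return {**message, "protocol": target}

    def transform_format(self, message, target_format):
        # Placeholder for format transformation, e.g. SOAP -> EDIFACT via XSLT.
        return {**message, "format": target_format}

    def dispatch(self, message):
        message = self.translate_protocol(message, "http", "ftp")
        message = self.transform_format(message, "edifact")
        # Routing: pick the provider service based on the message type.
        return self.routes[message["type"]](message)

esb = MiniESB()
esb.register("order",
             lambda m: f"order handled as {m['format']} over {m['protocol']}")
reply = esb.dispatch({"type": "order", "id": 3,
                      "protocol": "http", "format": "soap"})
```

The requester speaks SOAP over HTTP and the provider receives EDIFACT over FTP, yet neither side knows about the other's protocol or format; the bus mediates everything.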

SOA is essentially a dynamic collection of services which communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. SOA is based on a conventional request-response mechanism. A service consumer invokes a service provider through the network and has to wait until the completion of the operation on the provider's side. Replies are sent back to the consumer in a synchronous way.

In conclusion, heterogeneous applications are deployed in an enterprise and cloud IT environments to automate business operations, offerings, and outputs. Legacy applications are service-enabled by attaching one or more interfaces. By putting the ESB in the center, service-enabled applications are easily getting integrated to connect, communicate, collaborate, corroborate and correlate to produce the desired results. In short, SOA is for service-enablement and service-based integration of monolithic and massive applications. The complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm. The ESB is the most optimal middleware solution for enabling disparate and distributed applications to talk with one another in a risk-free fashion.

Event-driven service-oriented architecture

Today, most SOA efforts are keen on implementing synchronous request-response interaction patterns to connect different and distributed processes. This approach works well for highly centralized environments and creates a kind of loose coupling for distributed software components at the IT infrastructure level. However, SOA leads to the tight coupling of application functions due to the synchronous communication. That said, enterprise environments are increasingly dynamic and real-time in their interactions, decision-enablement, and actuation, and the SOA patterns may struggle to meet these pronounced requirements of next-generation enterprise IT.

SOA is a good option if the requirement is just to send requests and receive responses synchronously. But SOA is not good enough to handle real-time events asynchronously. That is why the new pattern of event-driven SOA, which intrinsically combines the proven SOA request-response paradigm with the EDA publish-subscribe paradigm, is attracting a lot of attention these days. That is, in order to fulfil the newly incorporated requirements, there is a need for such a composite pattern. This is being touted as the new-generation SOA (alternatively, SOA 2.0). It is based on the asynchronous message-driven communication model to propagate information across all sorts of enterprise-grade applications throughout an enterprise. Services are activated by differently sourced events, and the resulting event messages pass through the right services to accomplish the predestined business operation. Precisely speaking, the participating and contributing services are fully decoupled and joined through event messages. All kinds of dependencies get simply eliminated in this new model.

Applications are being designed, developed, and deployed in such a way to be extremely yet elegantly sensitive and responsive. With enterprise applications and big data mandating the distributed computing model, undoubtedly the event-driven SOA pattern is the way forward. The goals of dynamism, autonomy, and real-time interactions can be achieved through this new pattern. This new event-driven SOA pattern allows system architects and designers to process both event messages and service requests (RPC/RMI). This enables a closer affinity and association between business needs and the respective IT solutions. This invariably results in business agility, adaptivity, autonomy, and affordability.

The following diagram illustrates the traditional request-and-response SOA style. The SOA pattern generally prescribes the synchronous and pull-based approach:

The following diagram depicts the message-oriented, event-driven, asynchronous, and non-blocking process architecture: