Hands-On Software Architecture with Java

By: Giuseppe Bonocore

Overview of this book

Well-written software architecture is the core of an efficient and scalable enterprise application. Java, the most widespread technology in current enterprises, provides complete toolkits to support the implementation of a well-designed architecture. This book starts with the fundamentals of architecture and takes you through the basic components of application architecture. You'll cover the different types of software architectural patterns and application integration patterns and learn about their most widespread implementation in Java. You'll then explore cloud-native architectures and best practices for enhancing existing applications to better suit a cloud-enabled world. Later, the book highlights some cross-cutting concerns and the importance of monitoring and tracing for planning the evolution of the software, foreseeing predictable maintenance, and troubleshooting. The book concludes with an analysis of the current status of software architectures in Java programming and offers insights into transforming your architecture to reduce technical debt. By the end of this software architecture book, you'll have acquired some of the most valuable and in-demand software architect skills to progress in your career.
Table of Contents (20 chapters)

Section 1: Fundamentals of Software Architectures
Section 2: Software Architecture Patterns
Section 3: Architectural Context

The changing role of Java in cloud-native applications

Now that we’ve briefly touched on the various kinds of designs and diagrams of an application, let’s focus on the other fundamental topic of this book: the Java language.

It’s not uncommon to hear that Java is dead. However, if you are reading this book, you probably agree that this is far from the truth.

Of course, the landscape of software development languages for enterprise applications is now broader and more complicated than it was during Java's golden age; nevertheless, the language is still alive and widespread, especially in certain areas.

In this section, we will explore the usage of Java technology in the enterprise software landscape. Then, we will take a quick glance at the history of Java Enterprise Edition (JEE). This will be a good foundation to understand existing enterprise architectures and model modern, cloud-native applications based on this technology.

Now, let’s examine why Java technology is still thriving.

Why Java technology is still relevant today

The most important reason for Java's popularity is probably the availability of skills: there are plenty of experts in this language, as many polls and studies show (for example, PYPL and TIOBE). Another crucial point is the strength of the ecosystem, in terms of the quantity and quality of libraries, resources, and tooling available for the Java platform.

Rewriting complex applications (including their dependencies) from Java to another language could probably take years, and, long story short, there might be no reason to do that. Java just works, and it’s an incredibly productive platform. It might be slow and resource-intensive in some scenarios, but this is balanced by its stability. The language has been battle-tested, is feature-rich, and essentially, covers all the use cases required in an enterprise, such as transactionality, integration with legacy environments, and manageability.

Now, let’s take a look at where and how Java technology is used in enterprise environments. This can be very useful to understand existing scenarios and fit new applications into existing application landscapes.

Java usage in enterprise environments

In order to fit our Java application into the overall architecture, it's important to understand the typical context of a large enterprise from a software architecture perspective.

Of course, the enterprise architecture depends a lot on the industry domain (for instance, banking, telecommunications, media, and more), geography, and the tenure of the organization, so my vision might be slightly biased toward the segment I have worked with for the longest (a large enterprise in the EMEA area). Still, I think we can summarize it as follows:

  • Legacy: Big applications, usually running very core functions of the enterprise for many years (at least 10 and commonly more than 20). Needless to say, the technology here is not the most current (Cobol is widespread in this area, but it is not uncommon to see other things such as PL/SQL, huge batch scripts, and even C/C++ code). However, the language is seldom the issue here. Of course, nowadays, those skills are very rare to find on the job market, but usually, the software just works. The point is that, most of the time, nobody knows exactly what the software does, as it's poorly documented and tested. Moreover, you usually don't have automated release procedures, so every time you perform a bugfix, you have to cross your fingers. To make things worse, a proper testing environment has often never been set up, so most things have to be tested in production.
  • Web (and mobile): This is another big chunk of the enterprise architecture. Usually, it is easier to govern than legacy but still very critical. Indeed, by design, these applications are heavily customer-facing, so you can’t afford downtime or critical bugs. In terms of technologies, the situation here is more fragmented. Newer deployments are almost exclusively made of Single-Page Applications (SPAs) based on JavaScript (implemented with frameworks such as Angular, Vue, and React). Backends are REST services implemented in JavaScript (Node.js) or Java.
  • Business applications: Often, the gap between web applications and business applications is very thin. Here, the rule of thumb is that business applications are less web-centric (even if they often have a web GUI), and usually, they are not exposed to customers. The most common kind of business application is the management of internal back-office processes. It's hard to find a recurrent pattern in business applications since it's an area that contains very different things (such as CRMs, HR applications, branch office management, and more).
  • BigData: Under various names and nuances (such as data warehouses, data lakes, and AI), BigData is commonly a very large workload in terms of the resources required. Here, the technologies are often packaged software, while custom development is done using various languages, depending on the core engine chosen. The most common languages in this area are Java (and Scala), R (which is decreasing in popularity), and Python (which is increasing in popularity). In some implementations, a big chunk of SQL is used to stitch calculations together.
  • Middleware and infrastructure: This covers everything that glues the other applications together. The most common pattern here is integration (synchronous or asynchronous), and the keywords are ESB, SOA, and messaging. Other things, such as Single Sign-On and identity providers, can also be included here.

As I mentioned, this is just a coarse-grained classification, useful as a reference for where our application will fit and which other actors it will interact with.

Notice that the technologies mentioned are mostly traditional ones. With the emergence of modern paradigms (such as the cloud, microservices, and serverless), new languages and stacks are quickly gaining ground. Notable examples are Go in the microservice development area and Rust for system programming.

However, those technologies and approaches are often just evolutions (or brand-new applications) belonging to the same categories. Here, the most interesting exception is in the middleware area, where some approaches are decreasing in popularity (for example, SOA) in favor of lighter alternatives. We will discuss this in Chapter 7, Exploring Middleware and Frameworks.

Now that we’ve explored the widespread usage of Java in an enterprise context, let’s take a look at its recent history.

JEE evolution and criticism

JEE, as we have learned, is still central to common enterprise applications. The heritage of this platform is great: the effort put into standardizing a set of APIs for common features (such as transactionality, web services, and persistence) has been remarkable, and the cooperation between different vendors to provide interoperability and reference implementations has been very successful.

However, in the last couple of years, a different set of needs has emerged. The issue with JEE is that, in order to preserve long-term stability and cross-vendor compatibility, the technology does not evolve very quickly. With the emergence of the cloud and more modular applications, features such as observability, modular packaging, and access to NoSQL databases have become essential for modern applications. Of course, standards and committees have also had their moments, with developers starting to move away from vanilla implementations and toward third-party libraries and non-standard approaches.

Important Note:

The objective of this book is not to recap the history and controversy of the JEE platform. However, organizational issues (culminating with the donation of the project to the Eclipse Foundation) and less frequent releases have contributed to the decrease in popularity of the platform.

The rise of the Platform-as-a-Service (PaaS) paradigm is another important event that is changing the landscape. Modern orchestration platforms (with Kubernetes as the most famous example), both in the cloud and on-premises, are moving toward a different approach. We will examine this in greater detail later, but essentially, the core concept is that, for the sake of scalability and control, some of the typical features of the application server (for example, clustering and the service registry) are delegated to the platform itself. This is closely linked to the microservice approach and the benefits it brings. In the JEE world, this means that those features end up being duplicated.

Another point is containerization. One of the focal points of container technology is immutability and its impact on the stability and quality of applications: you package an application into a container and easily move it between different environments. Of course, this is not in the same direction as JEE servers, which have been engineered to host multiple applications, managing hot deploys and live changes of configuration.

A further consideration regarding application servers is that they are, by design, optimized for transaction throughput (often at the expense of startup times), and their runtime is general-purpose (including libraries covering many different use cases). Conversely, the cloud-native approach is usually aimed at a faster startup time and a runtime that is as small as possible, bringing only the features needed by that particular application. This will be the focus of our next section.

Introducing cloud-native Java

Since the inception of the microservices concept, the paradigm in the Java development community has increasingly shifted toward the fat jar approach. The concept is nothing new, as the first examples of uber jars (a synonym for fat jars) have been around since the early 2000s, mainly in the desktop development area. The idea behind them is pretty simple: instead of dynamically loading libraries at runtime, let's package them all together into an executable jar to simplify the distribution of our application. This is the opposite of the application server model, which aims to create an environment that is as configurable as possible, supporting things such as hot deployment and the hot-swapping of libraries, privileging uptime over immutability (and predictability).

In container-based and cloud-native applications, the fat jar approach has come to be viewed as the perfect candidate for the implementation of cloud-native, microservices-oriented applications. This is for many different reasons:

  • Testability: You can easily run and test the application in a local environment (a compatible Java Virtual Machine (JVM) is enough). Moreover, if the interface is properly defined and documented, it's easy to mock other components and simulate integration testing.
  • Ease of installation: The handover of the application to ops groups (or to testers) is pretty easy. Usually, it's enough to have the .jar file and the configuration (normally, in a text file or environment variables).
  • Stability across environments: Since everything is self-contained, it's easy to avoid the works-on-my-machine effect. The development execution environment (usually, the developer machine) is pretty similar to the production environment (aside from the configuration, which is usually well separated from the code, and, of course, external systems such as databases). This behavior mirrors what is provided by containers, and it's probably one of the most important reasons for the adoption of this approach in the development of microservices. A minimal sketch of such a self-contained service follows this list.
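
To make this more concrete, here is a minimal, hypothetical sketch of the kind of entry point that might be packaged into a fat jar. It relies only on the JDK's built-in com.sun.net.httpserver API; the PORT environment variable and the /hello path are invented for the example:

import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SelfContainedService {

    public static void main(String[] args) throws IOException {
        // Configuration is taken from the environment, keeping the artifact immutable
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        // The JDK's built-in HTTP server is enough for this sketch; a real fat jar
        // would typically embed a framework (and its dependencies) in the same archive
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello from a self-contained service".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}

Packaged together with all of its dependencies into a single executable jar, an artifact like this runs anywhere a compatible JVM is available, with a plain java -jar command, which is exactly what makes the approach attractive in containerized environments.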

There is one last important consideration to pay attention to: curiously enough, and in contrast with what I've just said, the all-in-one fat jar approach theoretically conflicts with the optimizations provided by containerization.

Indeed, one of the benefits provided by every container technology is layerization. Put simply, every container is composed by starting with a base image and just adding what’s needed. A pretty common scenario in the Java world is to create the application as a tower composed of the operating system plus the JVM plus dependencies plus the application artifact. Let’s take a glance at what this looks like in the following diagram. In gray, you will see the base image, which doesn’t change with a new release of the application. Indeed, a change to the application artifact means only redeploying the last layer on top of the underlying Base Image:

Figure 1.5 – Layering container images

As you can see in the preceding diagram, the release in this scenario is as light as simply replacing the Application Artifact layer (that is, the top layer).

By using the fat jar approach, you cannot implement this behavior. If you change something in your application but nothing in the dependencies, you have to rebuild the whole Fat JAR and put it on top of the JVM layer. You can observe what this looks like in the following diagram:

Figure 1.6 – Layering container images and fat jars

In this scenario, the release includes all of the application's dependencies, in addition to the application itself.

While this might appear to be a trivial issue, it can mean hundreds of megabytes copied back and forth into your environments, impacting development and release times, since most of the layers composing the container cannot be cached by the container runtime.

Some ecosystems are experimenting with hollow jars, which essentially replicate an approach similar to the application server: the composed (fat) jar is split into an application layer and a dependencies layer in order to avoid having to repackage and move everything each time. However, this approach is far from widespread.

The Java microservices ecosystem

One last consideration concerns the ecosystem of the Java microservices world. As we began to mention earlier, the approach here is to delegate more things to the platform. The service itself becomes simpler, having only the dependencies that are required (to reduce the size and the resource footprint) and focusing only on the business logic.

However, some of the features delegated to the application server are still required. The service registry, clustering, and configuration are the simplest examples that come to mind.
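
As a small illustration of how configuration is typically consumed in this ecosystem, here is a hedged sketch that assumes the MicroProfile Config API is available on the classpath; the greeting.message property and the class name are invented for the example, and the jakarta namespace is used (older releases use javax):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class GreetingService {

    // The value is resolved by the runtime from environment variables, system
    // properties, or configuration files, instead of an application server console
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    public String greet(String who) {
        return message + ", " + who;
    }
}

The important point is not the specific API but the fact that the value can be supplied by the surrounding platform (for example, through an environment variable) without changing the packaged artifact.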

Additionally, other, newer needs start to emerge:

  • HealthCheck is the first need. Since there is no application server to ensure your application is up and running, and the application is implemented as more than one running artifact, you will end up having to monitor every single microservice and possibly restart it (or take some other action) if it becomes unhealthy.
  • Visibility is another need. I might want to visualize the network of connections and dependencies, the traffic flowing between components, and more.
  • Last but not least: resiliency. This is often translated into the circuit breaker pattern, even if it's not the only pattern that helps here. If something in the chain of calls fails, you don't want the failure to cascade. A minimal sketch covering health checks and the circuit breaker follows this list.
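
To give a flavor of how the first and last of these needs are commonly addressed in code, the following is a sketch that assumes the MicroProfile Health and MicroProfile Fault Tolerance APIs are on the classpath; the class names, the check name, and the thresholds are illustrative, and the jakarta namespace is used (older releases use javax):

import jakarta.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Liveness probe: the platform (for example, Kubernetes) calls this to decide
// whether the microservice should be restarted
@Liveness
@ApplicationScoped
class ServiceLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // A real check would probe critical internal resources, not just answer "up"
        return HealthCheckResponse.named("service-liveness").up().build();
    }
}

// Resiliency: the circuit breaker opens after a configurable ratio of failures,
// so a struggling downstream service does not cause cascading failures
@ApplicationScoped
class InventoryClient {

    @CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 10_000)
    @Fallback(fallbackMethod = "fallbackInventory")
    public String fetchInventory(String itemId) {
        // The call to the downstream service would go here (omitted in this sketch)
        throw new IllegalStateException("Downstream service unavailable");
    }

    // Same signature as the protected method, returning a degraded but safe answer
    public String fallbackInventory(String itemId) {
        return "inventory-unknown";
    }
}

Visibility, by contrast, is usually addressed outside the application code, through tracing and monitoring tooling rather than annotations.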

So, as we will discover in the upcoming chapters, a new ecosystem will be needed to survive outside the JEE world.

Microservices have been a groundbreaking innovation in the world of software architectures, and they have started a whole new trend in the world of so-called cloud-native architectures (which is the main topic of this book). With this in mind, I cannot avoid mentioning another very promising paradigm: Serverless.

Serverless borrows some concepts from microservices, such as standardization and horizontal scaling, and takes them to the extreme by relieving the developer of any responsibility outside the code itself and delegating aspects such as packaging and deployment to an underlying platform. As a trend, serverless has become popular as a proprietary technology on cloud platforms, but it is increasingly used in hybrid cloud scenarios too.

Java is not famous in the serverless world. The need for compilation and the weight added by the JVM have traditionally been seen as showstoppers in this space. However, as we will explore further in Chapter 9, Designing Cloud-Native Architectures, Java technology is now gaining momentum in that area too.

And now, in order to better clarify different architectural designs, we will examine some examples based on a reference case study.