Docker Certified Associate (DCA): Exam Guide

By: Francisco Javier Ramírez Urea

Overview of this book

Developers have changed their deployment artifacts from application binaries to container images, and they now need to build container-based applications as containers are part of their new development workflow. This Docker book is designed to help you learn about the management and administrative tasks of the Containers as a Service (CaaS) platform. The book starts by getting you up and running with the key concepts of containers and microservices. You'll then cover different orchestration strategies and environments, along with exploring the Docker Enterprise platform. As you advance, the book will show you how to deploy secure, production-ready, container-based applications in Docker Enterprise environments. Later, you'll delve into each Docker Enterprise component and learn all about CaaS management. Throughout the book, you'll encounter important exam-specific topics, along with sample questions and detailed answers that will help you prepare effectively for the exam. By the end of this Docker containers book, you'll have learned how to efficiently deploy and manage container-based environments in production, and you will have the skills and knowledge you need to pass the DCA exam.
Table of Contents (22 chapters)

Section 1 - Key Container Concepts
Section 2 - Container Orchestration
Section 3 - Docker Enterprise
Section 4 - Preparing for the Docker Certified Associate Exam

Understanding the evolution of applications

As we have probably read about on almost every IT medium, the concept of microservices is key to the development of new, modern applications. Let's go back in time a little to see how applications have been developed over the years.

Monolithic applications are applications in which all components are combined into a single program that usually runs on a single platform. These applications were not designed with reusability in mind, nor with any kind of modularity, for that matter. This meant that every time a part of the code required an update, the whole application had to be involved in the process; for example, the entire application had to be recompiled for the change to take effect. Of course, things were not always quite so strict.

Applications grew in terms of the number of tasks and functionalities they performed, with some of these tasks being distributed to other systems or even to other, smaller applications. However, the core components were kept immutable. We used this programming model because running all application components together, on the same host, performed better than retrieving the required information from other hosts; network speeds were simply not sufficient for that. These applications were difficult to scale and difficult to upgrade. In fact, certain applications were locked to specific hardware and operating systems, which meant that developers needed the same hardware architectures at the development stage in order to evolve the applications.

We will discuss the infrastructure associated with these monolithic applications in the next section. The following diagram represents how the decoupling of tasks or functionalities has evolved from monolithic applications to Simple Object Access Protocol (SOAP) applications and the new paradigm of microservices:

In trying to achieve better application performance and to decouple components, we moved to three-tier architectures, based on a presentation tier, an application tier, and a data tier. This allowed different types of administrators and developers to be involved in application updates and upgrades. Each layer could run on a different host, but components only talked to one another within the same application.

This model is still present in our data centers today, separating frontends from application backends before reaching the database, where all the requisite data is stored. These components evolved to provide scalability, high availability, and manageability. On occasion, we had to include new middleware components to achieve these features (thus adding to the final equation, for example, application servers, distributed transaction managers, queueing systems, and load balancers). Updates and upgrades became easier, and isolating components let developers focus on specific application functionalities.
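
To make the layering concrete, here is a minimal, hypothetical sketch of how a request might flow through the three tiers. It is not an example from the book; the function names, table, and in-memory database are assumptions used purely for illustration:

```python
# Hypothetical three-tier flow: presentation -> application -> data.
# Each tier could live on a different host; only adjacent tiers talk
# to each other, and only within this one application.
import sqlite3


def data_tier_get_balance(conn: sqlite3.Connection, account_id: int) -> float:
    # Data tier: the only layer that touches the database.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    return row[0] if row else 0.0


def application_tier_report(conn: sqlite3.Connection, account_id: int) -> dict:
    # Application tier: business logic only, no presentation concerns.
    balance = data_tier_get_balance(conn, account_id)
    return {"account": account_id, "balance": balance, "overdrawn": balance < 0}


def presentation_tier_render(report: dict) -> str:
    # Presentation tier: formatting for the end user only.
    status = "OVERDRAWN" if report["overdrawn"] else "OK"
    return f"Account {report['account']}: {report['balance']:.2f} ({status})"


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES (1, 120.50)")
    print(presentation_tier_render(application_tier_report(conn, 1)))
```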

This model was extended and it got even better with the emergence of virtual machines in our data centers. We will cover how virtual machines have improved the application of this model in more detail in the next section.

As Linux systems grew in popularity, interaction between different components, and eventually between different applications, became a requirement. SOAP and other message-queueing integrations have helped applications and components exchange information, and networking improvements in our data centers have allowed us to start distributing these elements across different nodes, or even different locations.

Microservices take the decoupling of application components a step further, into smaller units. We usually define a microservice as a small unit of business functionality that can be developed and deployed on its own. By this definition, an application is a compound of many microservices. Microservices are very light in terms of host resource usage, which allows them to start and stop very quickly. It also lets us move application health from a high-availability mindset to one of resilience: we assume that a process will die (whether because of a problem or simply a component code update) and that we need to start a new one as quickly as possible to keep the main functionality healthy.
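
As a rough illustration of how small such a unit can be, the following is a minimal, hypothetical microservice built with only the Python standard library. The /greeting endpoint and port are assumptions for illustration, not an example from the book; in a real setup this unit would typically be packaged as a container image:

```python
# A minimal "greeting" microservice sketch: one small, independently
# deployable unit of business functionality.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/greeting":
            body = json.dumps({"message": "hello"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Starts and stops quickly; killing the process and starting a new
    # replica is cheap, which is exactly what resilience relies on.
    HTTPServer(("0.0.0.0", 8080), GreetingHandler).serve_forever()
```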

Microservices architecture is designed with statelessness in mind. This means that a microservice's state should be managed outside of its own logic, because we need to be able to run many replicas of the microservice (scaling up or down) and run them on any node of our environment, as required by the overall load, for example. We decouple the functionality from the infrastructure (we will see how far this concept of "run everywhere" can go in the next chapter).
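
The following sketch illustrates the idea of externalized state; the KeyValueStore interface and handle_request function are assumptions for illustration, not from the book. Because the handler keeps no local state, any replica can answer any request, and in practice the in-memory stand-in would be replaced by a shared store such as Redis or a database:

```python
# Illustrative sketch: stateless service logic with state held in an
# external store so that any replica can serve any request.
from typing import Optional, Protocol


class KeyValueStore(Protocol):
    def get(self, key: str) -> Optional[str]: ...
    def set(self, key: str, value: str) -> None: ...


def handle_request(user_id: str, store: KeyValueStore) -> str:
    # No instance-local state: the visit counter lives in the shared store,
    # so replicas can be started, stopped, or rescheduled freely.
    visits = int(store.get(f"visits:{user_id}") or 0) + 1
    store.set(f"visits:{user_id}", str(visits))
    return f"user {user_id} has visited {visits} time(s)"


class InMemoryStore(dict):
    # Stand-in for a real external store (Redis, a database, and so on).
    def set(self, key: str, value: str) -> None:
        self[key] = value


if __name__ == "__main__":
    store = InMemoryStore()
    print(handle_request("42", store))  # user 42 has visited 1 time(s)
    print(handle_request("42", store))  # user 42 has visited 2 time(s)
```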

Microservices provide the following features:

  • Managing an application in pieces allows us to replace a component with a newer version, or even with completely new functionality, without losing overall application functionality.
  • Developers can focus on one particular application feature or functionality, and only need to know how to interact with the other, similar pieces.
  • Microservices usually interact with one another through standard HTTP/HTTPS Representational State Transfer (REST) API calls (see the sketch after this list). RESTful systems aim for good performance, reliability, and the ability to scale.
  • Microservices are components that are prepared to have isolated life cycles. This means that one unhealthy component will not take down the whole application. We provide resilience to each component, so the application will not suffer a full outage.
  • Each microservice can be written in a different programming language, allowing us to choose the best one for maximum performance and portability.
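
As a minimal sketch of the REST-style interaction mentioned above, one microservice could call another over plain HTTP like this. The greeting-service URL and the JSON field are assumptions tied to the earlier microservice sketch; in practice, a dedicated HTTP client library might be used instead:

```python
# Illustrative REST call from one microservice to another using only the
# Python standard library. The greeting-service URL is hypothetical.
import json
import urllib.request


def fetch_greeting(base_url: str = "http://greeting-service:8080") -> str:
    # A plain HTTP GET; the response body is expected to be JSON such as
    # {"message": "hello"}, as served by the earlier microservice sketch.
    with urllib.request.urlopen(f"{base_url}/greeting", timeout=2) as response:
        payload = json.loads(response.read().decode("utf-8"))
    return payload["message"]


if __name__ == "__main__":
    print(fetch_greeting())
```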

Now that we have briefly reviewed the well-known application architectures that have developed over the years, let's take a look at the concept of modern applications.

A modern application has the following features:

  • The components will be based on microservices.
  • The application component's health will be based on resilience.
  • The component's states will be managed externally.
  • It will run everywhere.
  • It will be prepared for easy component updates.
  • Each application component will be able to run on its own but will provide a way to be consumed by other components.

Let's take a look.