Docker Certified Associate (DCA): Exam Guide

By: Francisco Javier Ramírez Urea

Overview of this book

Developers have changed their deployment artifacts from application binaries to container images, and they now need to build container-based applications as containers are part of their new development workflow. This Docker book is designed to help you learn about the management and administrative tasks of the Containers as a Service (CaaS) platform. The book starts by getting you up and running with the key concepts of containers and microservices. You'll then cover different orchestration strategies and environments, along with exploring the Docker Enterprise platform. As you advance, the book will show you how to deploy secure, production-ready, container-based applications in Docker Enterprise environments. Later, you'll delve into each Docker Enterprise component and learn all about CaaS management. Throughout the book, you'll encounter important exam-specific topics, along with sample questions and detailed answers that will help you prepare effectively for the exam. By the end of this Docker containers book, you'll have learned how to efficiently deploy and manage container-based environments in production, and you will have the skills and knowledge you need to pass the DCA exam.
Table of Contents (22 chapters)

Section 1 - Key Container Concepts
Section 2 - Container Orchestration
Section 3 - Docker Enterprise
Section 4 - Preparing for the Docker Certified Associate Exam

Infrastructures

For every application model that developers use, we need to provide an infrastructure architecture that is aligned with it.

In monolithic applications, as we have seen, all application functionality runs together. In some cases, applications were built for a specific hardware architecture, operating system, set of libraries, binary versions, and so on. This means that we need at least one hardware node for production, plus a node with the same architecture, and possibly the same resources, for development. If we add further environments to this equation, such as certification or preproduction for performance testing, the number of nodes required for each application becomes significant in terms of physical space, resources, and money spent.

For each application release, developers usually need a full production-like environment, where only the configuration differs between environments. This is hard to maintain because whenever an operating system component or feature is updated, the change must be replicated across all of the application's environments. There are many tools to help us with these tasks, but it is not easy, and the cost of keeping these almost-identical environments is considerable. On top of that, node provisioning could take months because, in many cases, a new application release meant having to buy new hardware.

Three-tier applications were usually deployed on traditional infrastructures using application servers, which allowed application administrators to scale up components whenever possible and to prioritize some components over others.

With virtual machines in our data centers, we were able to distribute a host's hardware resources between virtual nodes. This was a revolution in terms of node provisioning time and the costs of maintenance and licensing. Virtual machines worked very well for monolithic and three-tier applications, but application performance depends on the share of the host's resources allocated to the virtual node. Deploying application components on different virtual nodes was a common pattern because it allowed us to run them virtually anywhere. On the other hand, we still depended on operating system resources and releases, so building a new application release remained tied to the operating system.

From a developer's perspective, having different environments for building components, testing them side by side, and certifying applications became very easy. However, these new infrastructure components required new administrators and additional effort to provide nodes for development and deployment. In fast-growing enterprises whose applications change frequently, this model helped significantly in providing tools and environments to developers. However, agility problems persisted when new applications had to be created weekly or when many releases or fixes had to be delivered per day. New provisioning tools such as Ansible or Puppet allowed virtualization administrators to provide these nodes faster than ever, but as infrastructures grew, management became complicated.

Local data centers were rendered obsolete and, although it took time, infrastructure teams started to use cloud computing providers. They started with a few services, such as Infrastructure as a Service (IaaS), which allowed us to deploy virtual nodes in the cloud as if they were in our own data center. With improved network speed and reliability, it became easy to deploy our applications everywhere; data centers got smaller, and applications began to run in distributed environments across different cloud providers. To make automation easy, cloud providers exposed APIs for their infrastructure, allowing users to deploy virtual machines in minutes.
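To illustrate what this API-driven provisioning looks like, here is a minimal sketch using the AWS SDK for Python (boto3) as just one example; the region, image ID, and instance type are placeholder values, and other IaaS providers expose similar interfaces:

# Illustrative sketch of API-driven provisioning with boto3; all values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Request a single small virtual machine; the API call returns in seconds
# and the instance is usually running within a couple of minutes.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image identifier
    InstanceType="t3.micro",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])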

However, as more virtualization options appeared, other approaches based on Linux kernel features and their isolation models came into being, reviving older projects such as chroot and jail environments (quite common on Berkeley Software Distribution (BSD) operating systems) and Solaris zones.

The concept of process containers is not new; in fact, it is more than 10 years old. Process containers were designed to isolate certain resources, such as CPU, memory, disk I/O, or the network, for a group of processes. This concept is what is now known as control groups (cgroups).
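To make the idea more concrete, the following is a minimal sketch of how a group of processes can be constrained through the cgroup filesystem interface. It assumes a Linux host with the cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name "demo" and the limits are arbitrary examples:

# Minimal cgroup v2 sketch: limit memory and CPU for the current process.
# Assumes /sys/fs/cgroup is a cgroup v2 mount and we are running as root.
import os
from pathlib import Path

root = Path("/sys/fs/cgroup")
group = root / "demo"  # arbitrary group name for this example

# Enable the memory and cpu controllers for children of the root group.
(root / "cgroup.subtree_control").write_text("+memory +cpu")

# A cgroup is just a directory; each resource limit is a file inside it.
group.mkdir(exist_ok=True)
(group / "memory.max").write_text(str(256 * 1024 * 1024))  # 256 MiB of RAM
(group / "cpu.max").write_text("50000 100000")              # 50% of one CPU

# Move the current process (and its future children) into the group.
(group / "cgroup.procs").write_text(str(os.getpid()))

Container runtimes automate exactly this kind of bookkeeping for every container they start.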

The following diagram shows a rough timeline of the introduction of containers to enterprise environments:

A few years later, a container manager implementation was released to provide an easy way to control the usage of cgroups while also integrating Linux namespaces. This project, named Linux Containers (LXC), is still available today and was crucial in showing others an easier way to take advantage of process isolation.

In 2013, a new vision of how containers should run in our environments was introduced, providing an easy-to-use interface for containers. It started as an open source solution, and Solomon Hykes, among others, founded what became known as Docker, Inc. They quickly provided a set of tools for running, creating, and sharing containers with the community, and Docker, Inc. grew very rapidly as containers became increasingly popular.

Containers have been a great revolution for our applications and infrastructures, and we are going to explore this area further as we progress.