Docker Certified Associate (DCA): Exam Guide

By: Francisco Javier Ramírez Urea

Overview of this book

Developers have changed their deployment artifacts from application binaries to container images, and they now need to build container-based applications as containers are part of their new development workflow. This Docker book is designed to help you learn about the management and administrative tasks of the Containers as a Service (CaaS) platform. The book starts by getting you up and running with the key concepts of containers and microservices. You'll then cover different orchestration strategies and environments, along with exploring the Docker Enterprise platform. As you advance, the book will show you how to deploy secure, production-ready, container-based applications in Docker Enterprise environments. Later, you'll delve into each Docker Enterprise component and learn all about CaaS management. Throughout the book, you'll encounter important exam-specific topics, along with sample questions and detailed answers that will help you prepare effectively for the exam. By the end of this Docker containers book, you'll have learned how to efficiently deploy and manage container-based environments in production, and you will have the skills and knowledge you need to pass the DCA exam.
Table of Contents (22 chapters)

Section 1 - Key Container Concepts
Section 2 - Container Orchestration
Section 3 - Docker Enterprise
Section 4 - Preparing for the Docker Certified Associate Exam

Docker components

In this section, we will describe the main Docker components and binaries used for building, distributing, and deploying containers at every stage of execution.

Docker Engine is the core component of the container platform. Docker is a client-server application, and Docker Engine provides the server side. This means there is a main process that runs as a daemon on the host, and a client-side application that communicates with the server using REST API calls.
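
We can see this split directly from the command line. The following is a minimal check, assuming a working local installation; the exact output layout varies between releases:

$ docker version                                   # prints a Client section and a Server (Engine) section
$ docker version --format '{{.Server.Version}}'    # query only the daemon's version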

Docker Engine's latest versions provide separate packages for the client and the server. On Ubuntu, for example, if we take a look at the available packages, we will see something like this:
- docker-ce-cli: Docker CLI, the open source application container engine
- docker-ce: Docker, the open source application container engine
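
As a quick sketch, on an Ubuntu host with Docker's APT repository already configured, the packages can be located and installed as follows (the containerd.io package name comes from Docker's own repository and may differ in other distributions):

# Search the configured repositories for the Docker Engine packages
$ apt-cache search --names-only 'docker-ce'

# Install the daemon, the CLI client, and the container runtime
$ sudo apt-get install docker-ce docker-ce-cli containerd.io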

The following diagram represents the Docker daemon and its different levels of management:

The Docker daemon listens for Docker API requests and is responsible for all Docker object actions, such as creating an image, listing volumes, and running a container.

The Docker API is exposed through a Unix socket by default. It can also be used from within code, using client libraries that are available for many programming languages. Querying for running containers can be done with the Docker client or against the API directly; for example, with curl --no-buffer -XGET --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json.
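
Both paths end up at the same API endpoint. The following sketch shows the CLI call and the equivalent direct API request over the default Unix socket (the v1.24 version prefix is taken from the example above and may differ on your engine):

# List running containers through the Docker client
$ docker container ls

# The same query sent directly to the REST API over the Unix socket
$ curl --no-buffer -XGET --unix-socket /var/run/docker.sock \
    http://localhost/v1.24/containers/json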

When deploying cluster-wide environments with Swarm orchestration, daemons will share information between them to allow the execution of distributed services within the pool of nodes.

On the other hand, the Docker client will provide users with the command line required to interact with the daemon. It will construct the required API calls with their payloads to tell the daemon which actions it should execute.

Now, let's take a deeper look at the Docker daemon component to find out more about its behavior and usage.

Docker daemon

The Docker daemon usually runs as a systemd-managed service, although it can run as a standalone process (this is very useful when debugging daemon errors, for example). As we have seen previously, dockerd provides an API interface that allows clients to send commands and interact with the daemon. containerd is the component that actually manages containers. It was introduced as a separate daemon in Docker 1.11 and is responsible for managing storage, networking, and the interaction between namespaces. It also manages image shipping and, finally, runs containers using another external component. This external component, runc, is the real executor of containers; its function is simply to receive an order to run a container. These components are community projects, so the only one provided by Docker itself is dockerd. All the other daemon components are community-driven and use standard image specifications defined by the Open Container Initiative (OCI).

In 2017, Docker donated containerd as part of its contribution to the open source community, and it is now part of the Cloud Native Computing Foundation (CNCF). The OCI was founded in 2015 as an open governance structure for the express purpose of creating open industry standards around container formats and runtimes. The CNCF hosts and manages many of the most widely used components of modern infrastructure. It is part of the nonprofit Linux Foundation and hosts projects such as Kubernetes, containerd, and The Update Framework.

By way of a summary, dockerd manages the interaction with the Docker client. To run a container, the container's configuration first needs to be created so that the daemon triggers containerd (via gRPC) to create it. containerd then creates an OCI definition that runc uses to run the new container. Docker implements these components under different names (they have changed between releases), but the concept is still valid.
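
On a typical installation, we can verify that these separate pieces are present. The following checks are only a sketch; binary names and locations have changed between releases:

# The client-facing daemon and the container lifecycle daemon
$ ps -e | grep -E 'dockerd|containerd'

# The OCI runtime that actually executes containers
$ runc --version

# Ask the engine which runtime it uses by default (usually runc)
$ docker info --format '{{.DefaultRuntime}}'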

The Docker daemon can listen for Docker Engine API requests on different types of sockets: unix, tcp, and fd. By default, the daemon on Linux uses a Unix domain socket (or IPC socket) that is created at /var/run/docker.sock when the daemon starts. Only the root user and members of the docker group can access this socket, so only they will be able to create containers, build images, and so on. In fact, access to the socket is required for any Docker action.
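
As a sketch of how the listening sockets are configured, the daemon accepts one or more -H/--host flags, and the same values can be set with the hosts key in /etc/docker/daemon.json. Exposing the TCP socket without TLS is insecure and is shown here for illustration only:

# Start the daemon on the local Unix socket and on TCP port 2375
# (an unauthenticated TCP socket gives root-equivalent access; protect it with TLS)
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# The equivalent configuration in /etc/docker/daemon.json:
# { "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"] }

# Check the socket's ownership and add a user to the docker group
# ('myuser' is just a placeholder)
$ ls -l /var/run/docker.sock
$ sudo usermod -aG docker myuser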

Docker client

The Docker client is used to interact with the server. It needs to be connected to a Docker daemon to perform any action, such as building an image or running a container.

A Docker daemon and client can run on the same host system, or the client can manage a remote daemon. The Docker client and daemon communicate over a REST API, which is exposed on UNIX sockets (by default) or a network interface, as we learned earlier.
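
As a small sketch, a client can be pointed at a remote daemon through the DOCKER_HOST environment variable (the host name below is a placeholder, and the remote daemon must already be listening on that TCP port):

# Point the local client at a remote daemon
$ export DOCKER_HOST=tcp://docker-host.example.com:2375
$ docker info

# Unset the variable to go back to the local Unix socket
$ unset DOCKER_HOST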

Docker objects

The Docker daemon manages all kinds of Docker objects, which we interact with using the Docker client command line.

The following are the most common objects at the time of writing this book:

  • IMAGE
  • CONTAINER
  • VOLUME
  • NETWORK
  • PLUGIN

There are other objects that are only available when we deploy Docker Swarm orchestration:

  • NODE
  • SERVICE
  • SECRET
  • CONFIG
  • STACK
  • SWARM

The Docker command line provides the actions that the Docker daemon is allowed to execute via REST API calls. There are common actions, such as list (or ls), create, rm (for remove), and inspect, and other actions that are restricted to specific objects, such as cp (for copying).

For example, we can get a list of running containers on a host by running the following command:

$ docker container ls

There are many commonly used aliases, such as docker ps for docker container ls or docker run for docker container run. I recommend using the long command-line format because it is easier to remember if we understand which actions are allowed for each object.
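
The following pairs are equivalent; the long format makes the object we are acting on explicit:

$ docker ps                        # alias
$ docker container ls              # long format

$ docker images                    # alias
$ docker image ls                  # long format

$ docker run --rm nginx:alpine     # alias
$ docker container run --rm nginx:alpine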

There are other tools available in the Docker ecosystem, such as docker-machine and docker-compose.

Docker Machine is a community tool created by Docker that allows users and administrators to easily deploy Docker Engine on hosts. It was developed to quickly provision Docker Engine on cloud providers such as Azure and AWS, but it evolved to offer other implementations, and nowadays it is possible to use many different drivers for many different environments. We can use docker-machine to deploy docker-engine on VMware (over vCloud Air, Fusion, Workstation, or vSphere), Microsoft Hyper-V, and OpenStack, among others. It is also very useful for quick labs, or demonstration and test environments on VirtualBox or KVM, and it even allows us to provision docker-engine software over SSH. docker-machine runs on Windows and Linux, and provides integration between the client and the provisioned Docker host daemons. This way, we can interact with a provisioned host's Docker daemon remotely, without having to connect to it using SSH, for example.
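
As a minimal sketch, assuming VirtualBox is installed locally and using mynode as an arbitrary machine name:

# Provision a VirtualBox VM with Docker Engine installed
$ docker-machine create --driver virtualbox mynode

# Load the environment variables that point the local client at that VM
$ eval $(docker-machine env mynode)

# Plain docker commands now talk to the provisioned daemon
$ docker info

# List all machines managed by docker-machine
$ docker-machine ls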

On the other hand, Docker Compose is a tool that allows us to run multi-container applications on a single host. We introduce the concept here only in relation to multi-service applications that will run on Swarm or Kubernetes clusters. We will learn about docker-compose in Chapter 5, Deploying Multi-Container Applications.
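
As a small preview of what Chapter 5 covers, the following is a hypothetical two-service docker-compose.yml (the image names are just examples) and the commands that start and stop it on a single host:

$ cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
EOF

# Start both services in the background
$ docker-compose up -d

# Stop and remove the application's containers and network
$ docker-compose down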