Getting Started with Containerization

By: Dr. Gabriel N. Schenker, Hideto Saito, Hui-Chuan Chloe Lee, Ke-Jou Carol Hsu

Overview of this book

Kubernetes is an open source orchestration platform for managing containers in a cluster environment. This Learning Path introduces you to the world of containerization, in addition to providing you with an overview of Docker fundamentals. As you progress, you will be able to understand how Kubernetes works with containers. Starting with creating Kubernetes clusters and running applications with proper authentication and authorization, you'll learn how to create high-availability Kubernetes clusters on Amazon Web Services (AWS), and also how to use kubeconfig to manage different clusters. Whether it is learning about Docker containers and Docker Compose, or building a continuous delivery pipeline for your application, this Learning Path will equip you with all the right tools and techniques to get started with containerization. By the end of this Learning Path, you will have gained hands-on experience of working with Docker containers and orchestrators, including SwarmKit and Kubernetes.

This Learning Path includes content from the following Packt products:

  • Kubernetes Cookbook - Second Edition by Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu
  • Learn Docker - Fundamentals of Docker 18.x by Gabriel N. Schenker

Container architecture


Now, let's discuss at a high level how a system that can run Docker containers is designed. The following diagram illustrates what a computer on which Docker has been installed looks like. By the way, a computer with Docker installed is often called a Docker host, because it can run or host Docker containers:

High-level architecture diagram of the Docker engine

In the preceding diagram, we see three essential parts:

  • On the bottom, we have the Linux operating system
  • In the middle, in dark gray, we have the container runtime
  • On the top, we have the Docker engine

Containers are only possible because the Linux OS provides a number of primitives, such as namespaces, control groups, layer capabilities, and more, which are leveraged in a very specific way by the container runtime and the Docker engine. Linux kernel namespaces, such as process ID (pid) namespaces or network (net) namespaces, allow Docker to encapsulate or sandbox the processes that run inside a container. Control groups make sure that containers cannot suffer from the noisy-neighbor syndrome, where a single application running in a container could consume most or all of the available resources of the whole Docker host. Control groups allow Docker to limit the resources, such as CPU time or the amount of RAM, that each container is allocated at most.
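
To make this concrete, the following commands are a minimal sketch (the container name, image, and limit values are arbitrary examples) of how control groups show up in everyday Docker usage: the engine turns the resource flags into cgroup settings on the host:

    # Start a container capped at 256 MB of RAM and half a CPU core;
    # Docker enforces these limits through Linux control groups.
    docker run -d --name limited-web --memory=256m --cpus=0.5 nginx:alpine

    # Show the container's live resource consumption relative to its limits.
    docker stats --no-stream limited-web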

The container runtime on a Docker host consists of containerd and runc. runc provides the low-level functionality of the container runtime, while containerd, which is based on runc, provides the higher-level functionality. Both are open source and have been donated by Docker to the CNCF.
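
On a typical Docker installation, you can confirm that both components are present. The following commands are just a quick sanity check; the exact output depends on your Docker version, and the DefaultRuntime template field is assumed to be available in your engine release:

    # Report the versions of the two container runtime components.
    containerd --version
    runc --version

    # Ask the Docker engine which low-level runtime it uses by default (normally runc).
    docker info --format '{{.DefaultRuntime}}'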

The container runtime is responsible for the whole life cycle of a container. It pulls a container image (which is the template for a container) from a registry if necessary, creates a container from that image, initializes and runs the container, and eventually stops and removes the container from the system when asked. 
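
The same life cycle is visible through the Docker CLI. The following sequence is an illustrative sketch (using the public nginx:alpine image and an arbitrary container name) that walks a container from image pull to removal:

    # Pull the container image (the template for the container) from a registry.
    docker image pull nginx:alpine

    # Create a container from the image, then initialize and run it.
    docker container create --name lifecycle-demo nginx:alpine
    docker container start lifecycle-demo

    # Stop the container and remove it from the system when it is no longer needed.
    docker container stop lifecycle-demo
    docker container rm lifecycle-demo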

The Docker engine provides additional functionality on top of the container runtime, such as network libraries or support for plugins. It also provides a REST interface over which all container operations can be automated. The Docker command-line interface that we will use frequently in this book is one of the consumers of this REST interface.
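
Because every container operation goes through this REST interface, it can also be called directly, without the CLI. As a minimal sketch (the API version in the path, here v1.40, depends on your engine release), the following call lists the running containers by talking to the engine over its local Unix socket:

    # List running containers by calling the Docker engine's REST API directly
    # over the local Unix socket, bypassing the docker CLI.
    curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json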