Differentiating between containerization and virtualization

It is pertinent, indeed paramount, to spell out the game-changing advantages of the Docker-inspired containerization movement over the widely used and fully matured virtualization paradigm. As elucidated earlier, virtualization was the breakthrough idea and trendsetter behind the unprecedented adoption of cloudification, which in turn enabled the industrialization of IT. However, through innumerable real-world deployments, cloud service providers came to the conclusion that the virtualization technique has its own drawbacks, and that is why the containerization movement has taken off so powerfully.

Containerization brings in strategically sound optimizations through a few crucial and well-defined rationalizations and the insightful sharing of compute resources. Some of the innate and hitherto underutilized capabilities of the Linux kernel have been rediscovered, and a few additional capabilities are being added to strengthen the containerization process and widen its applicability. Together, these capabilities provide the much-wanted automation and acceleration that will take the fledgling containerization idea to greater heights in the days ahead. The noteworthy business and technical advantages of containerization include bare metal-scale performance, real-time scalability, higher availability, support for IT DevOps, software portability, and so on. All unnecessary layers are eliminated so that hundreds of application containers can be rolled out in seconds. The following diagram depicts virtualization on the left-hand side, while the right-hand side illustrates the simplifications achieved with containers:

Type 1 virtualization versus containerization

As we all know, there are two main virtualization types. In Type 1 virtualization, the hypervisor itself provides the OS functionalities along with the VM provisioning, monitoring, and management capabilities, so there is no need for a separate host OS. VMware ESXi is the leading Type 1 hypervisor. Production environments and mission-critical applications typically run on Type 1 virtualization.

Type 2 virtualization versus containerization

The second type is Type 2 virtualization, wherein the hypervisor runs on top of the host OS, as shown in the preceding figure. This additional layer affects system performance, so Type 2 virtualization is generally used for development, testing, and staging environments; the extra modules involved in every execution path slow things down considerably. This is where Docker-enabled containerization brings a significant boost to system performance.

In summary, VMs are a time-tested and battle-hardened software stack and there are a number of enabling tools to manage the OS and applications on it. The virtualization tool ecosystem is consistently expanding. Applications in a VM are hidden from the host OS through the hypervisor. However, Docker containers do not use a hypervisor to provide the isolation. With containers, the Docker host uses the process and filesystem isolation capabilities of the Linux kernel to guarantee the much-demanded isolation.
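
As a quick, minimal sketch of this kernel-level isolation (assuming a Linux host with Docker installed and access to the public alpine image on Docker Hub), the following commands show that a container shares the host's kernel while still getting its own process tree and root filesystem:

    # The container reports the same kernel release as the host,
    # because there is no guest OS or hypervisor in between.
    uname -r
    docker run --rm alpine uname -r

    # Inside the container, the process list starts at PID 1; the kernel's
    # PID namespace hides every other process running on the host.
    docker run --rm alpine ps

    # The container also gets its own root filesystem, taken from the image.
    docker run --rm alpine ls /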

Docker containers have a smaller disk footprint as they don't include an entire OS. Setup and startup times are therefore significantly lower than for a typical VM. The principal container advantage is the speed with which application code can be developed, composed, packaged, and shared widely. Containers have emerged as the most prominent platform for the faster creation, deployment, and delivery of microservices-based distributed applications. Containers also save a noteworthy amount of IT resources, since they consume far less memory.
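
To see this footprint difference in practice, here is a small sketch that again assumes the alpine base image; the exact sizes and timings will, of course, vary from host to host:

    # Pull a minimal base image and check its size: a few megabytes,
    # compared with the gigabytes a typical VM disk image needs.
    docker pull alpine
    docker images alpine

    # Time a complete create-run-destroy cycle; it normally finishes in
    # well under a second, whereas booting a VM takes tens of seconds.
    time docker run --rm alpine true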

A neat comparison that is often drawn between VMs and containers is that of a house and an apartment complex. A house, which has its own plumbing, electrical, heating, and protection from unwanted visitors, is self-contained. An apartment complex has the same resources as a house, such as electricity, plumbing, and heating, but they are shared among all the units. The individual apartments come in various sizes, and you only rent what you need, not the entire complex. The apartments are the containers, and the shared resources of the complex are the container host.

Developers can use simple, incremental commands to create a fixed image that is easy to deploy, and they can automate the building of those images using a Dockerfile. Developers can then share those images using simple, Git-style push and pull commands against public or private Docker registries. Since the inception of the Docker technology, there has been unprecedented growth in third-party tools for simplifying and streamlining Docker-enabled containerization.
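
The following sketch walks through that workflow end to end. The application file, the image name, and the registry account (myuser) are placeholders for illustration, and the push step assumes you have already logged in to a registry with docker login. A minimal Dockerfile might look like this:

    # Each instruction adds an incremental, cacheable image layer.
    FROM alpine:3.6
    RUN apk add --no-cache python3
    COPY app.py /app/app.py
    CMD ["python3", "/app/app.py"]

With that file and a small app.py in the same directory, the image can be built and shared as follows:

    # Build a fixed, tagged image from the Dockerfile in the current directory.
    docker build -t myuser/hello-app:1.0 .

    # Share the image through a registry using Git-style push and pull.
    docker push myuser/hello-app:1.0
    docker pull myuser/hello-app:1.0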

The latest additions to the Docker platform

Containers are primarily presented as the next-generation application delivery platform. They provide a mechanism for efficiently virtualizing the OS for the sole purpose of running applications, including fast-emerging microservices, on a single host kernel. The open source Docker platform is now available in two main variants:

  • Docker Enterprise Edition (Docker EE): This is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale. Docker EE is integrated, certified, and supported to provide enterprises with the most secure container platform in the industry to modernize all applications.
  • Docker Community Edition (Docker CE): This is ideal for developers and small teams looking to get started with Docker and experiment with container-based applications. Docker CE is available on many platforms, from desktop to cloud to server. Docker CE is available for macOS and Windows and provides a native experience to help you focus on learning Docker. You can build and share containers and automate the development pipeline, all from a single environment. A quick way to install and verify Docker CE is sketched right after this list.
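
As a quick way to get started with Docker CE on a Linux machine, here is a minimal sketch that assumes a recent Ubuntu host and uses Docker's convenience script from get.docker.com (production installations usually use the distribution packages instead):

    # Download and run Docker's convenience installation script.
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Confirm that the client and the daemon are running, then run a test container.
    sudo docker version
    sudo docker run --rm hello-world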

Windows containers

Docker and Microsoft have entered into a long-lasting partnership to bring the much-needed agility, portability, and security benefits of the Docker platform to every edition of Windows Server 2016. Organizations that upgrade their servers to this OS can then use containers all the way from development to production. Windows uses namespace isolation, resource control, and process-isolation mechanisms to restrict which files, network ports, and running processes each container can access. This isolation ensures that applications running in containers can't interact with or see other applications running on the host OS or in other containers. Microsoft includes two different types of containers. The first type is based on the Windows Server Core image and is called a Windows Server container. The second is called a Hyper-V container and is based on the Windows Nano Server image.

Windows Server containers share the underlying OS kernel. This architecture enables faster startup and efficient packaging while delivering the capability to run a large number of containers per host. These containers share local data and APIs, with lower isolation levels between them. They are best suited for homogeneous applications that do not require strong isolation and security constraints. Large microservice applications composed of multiple containers can use Windows Server containers for performance and efficiency.

Hyper-V containers offer the best of both worlds: VMs and containers. Since each container gets a dedicated copy of the Windows kernel and its own memory, Hyper-V containers provide better isolation and security than Windows Server containers. They are more secure because interaction with the host operating system and with other containers is minimal. This limited sharing of resources, however, increases the startup time and the size of packaged containers.
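
On a Windows Server 2016 container host with the Docker Engine installed, the isolation level is selected per container at run time with the --isolation flag. The following is a minimal sketch, run from PowerShell and using the book-era microsoft/nanoserver base image (the repository name is an assumption and has changed in later releases):

    # Windows Server container: shares the host kernel (process isolation).
    docker run --rm --isolation=process microsoft/nanoserver cmd /c ver

    # Hyper-V container: the same image, but wrapped in a lightweight utility VM
    # with its own copy of the Windows kernel for stronger isolation.
    docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c ver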

Hyper-V containers are preferred in multi-tenant environments such as public clouds. Here is a summary of Windows container jargon with descriptions:

  • Container Host: A physical machine or VM configured with the Windows containers feature.
  • Container Image: A container image contains the base OS, the application, and all of the application's dependencies, so that a container can be deployed quickly.
  • Container OS Image: The base layer of a container image; it provides the operating system environment.
  • Container Registry: Container images are stored in a container registry and can be downloaded on demand. A registry can be hosted on-premises or off-premises.
  • Docker Engine: The core of the open source Docker platform; a lightweight container runtime that builds and runs Docker containers.
  • Dockerfile: A text file of instructions that developers use to automate the creation of container images. From a Dockerfile, the Docker daemon can build a container image automatically.

Microsoft has its own public and official repository available via this URL: https://hub.docker.com/u/microsoft/. Amazon Web Services (AWS) has begun supporting Windows containers, providing a more direct way for older applications to jump into the cloud.

Windows containers provide the same advantages as Linux containers for applications that run on Windows. They support the Docker image format and the Docker API, and they can also be managed using PowerShell. Two container runtimes are available for Windows containers: Windows Server containers and Hyper-V containers. Hyper-V containers provide an additional layer of isolation by hosting each container in a highly optimized VM.

This addresses the security concerns of running containers on top of a shared OS. It can also enhance container density on a compute instance: by running multiple containers inside Hyper-V VMs, you can take your density count to another level and run hundreds of containers on a single host. Windows containers are just Docker containers. Currently, you can deploy Windows containers on Windows Server 2016 (Full, Core, or Nano Server editions), Windows 10 (Enterprise and Professional editions), and Azure. You can deploy and manage these containers from any Docker client, including the Windows command line once the Docker Engine is installed, as well as from PowerShell, which is now open source.
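
As a quick check (a sketch run from PowerShell; the same docker commands also work from the Windows command line), you can ask the Docker Engine which operating system type it serves containers for:

    # A Windows container host reports "windows" here; a Linux host reports "linux".
    docker info --format "{{.OSType}}"

    # Show client and server details, including the OS/Arch of each side.
    docker version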

In this book, we focus on Docker CE.