Learning Docker - Second Edition

By: Vinod Singh, Pethuru Raj, Jeeva S. Chelladhurai

Overview of this book

Docker is an open source containerization engine that offers a simpler and faster way to develop and run software. Docker containers wrap software in a complete filesystem that contains everything it needs to run, enabling any application to run anywhere. This flexibility and portability means that you can run apps in the cloud, on virtual machines, or on dedicated servers. This book will give you a tour of the new features of Docker and help you get started with Docker by building and deploying a simple application. It will walk you through the commands required to manage Docker images and containers. You'll be shown how to download new images, run containers, list the containers running on the Docker host, and kill them. You'll learn how to leverage Docker's volumes feature to share data between the Docker host and its containers; this data management feature is also useful for persisting data. This book also covers how to orchestrate containers using Docker Compose, debug containers, and secure containers using the AppArmor and SELinux security modules.
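
In command form, that day-to-day workflow boils down to a handful of Docker CLI calls, sketched below (the image name and container name are illustrative choices, not ones prescribed by the book):

    $ docker pull nginx:latest           # download a new image
    $ docker run -d --name web nginx     # run a container from that image
    $ docker ps                          # list containers running on the Docker host
    $ docker kill web                    # kill the running container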

The key drivers for Dockerization

The first and foremost driver for Docker-enabled containerization is to competently and completely overcome the widely expressed limitations of the virtualization paradigm. We have been working with proven virtualization techniques and tools for quite a long time now in order to realize the much-demanded software portability. That is, with the goal of eliminating the inhibiting dependency between software and hardware, there have been several initiatives, including the matured and stabilized virtualization paradigm. Virtualization is a beneficial abstraction, accomplished through the incorporation of an additional layer of indirection between hardware resources and software components. Through this abstraction layer (the hypervisor, also called the Virtual Machine Monitor (VMM)), any software application can run on any underlying hardware without a hitch or hurdle. In short, the longstanding goal of software portability is sought through this middleware layer. However, the much-publicized portability target is not fully met even by the virtualization technique. Hypervisor software from different vendors gets in the way of ensuring the much-needed application portability. Further, the distribution, version, edition, and patching differences of operating systems and application workloads hinder the smooth portability of workloads across systems and locations.

Similarly, there are various other drawbacks attached to the virtualization paradigm. In data centers and server farms, the virtualization technique is typically used to create multiple Virtual Machines (VMs) out of a physical machine, and each VM has its own Operating System (OS). Through this solid and sound isolation, enacted through automated tools and controlled resource sharing, multiple heterogeneous applications can be accommodated on a single physical machine. That is, hardware-assisted virtualization enables disparate applications to run simultaneously on a single physical server. With the virtualization paradigm, various kinds of IT infrastructure (server machines, storage appliances, and networking solutions) become open, programmable, remotely monitorable, manageable, and maintainable. However, because of the verbosity and bloat (every VM carries its own OS), VM provisioning typically takes a few minutes, and such a long delay is not acceptable in production environments.
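
You can see the contrast for yourself with the standard Docker CLI. The following is a minimal sketch (ubuntu:16.04 is only an illustrative image choice; any small image will do). Pulling the image is a one-time cost; after that, starting a container typically takes on the order of a second, rather than the minutes a VM needs to boot:

    $ docker pull ubuntu:16.04
    $ time docker run --rm ubuntu:16.04 echo "Hello from a container"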

The other widely expressed drawback closely associated with virtualization is performance. Virtualized systems consume an excessive share of precious and expensive IT resources (processing, memory, storage, network bandwidth, and so on). Besides the longer provisioning time, the execution overhead of VMs is also on the higher side, because every operation has to pass through multiple layers: the guest OS, the hypervisor, and the underlying hardware.

Finally, compute virtualization has flourished, whereas the closely associated network and storage virtualization concepts are only just taking off. Precisely speaking, building distributed applications and fulfilling varying business expectations mandates faster and more flexible provisioning, as well as high availability, reliability, scalability, and maneuverability, of all the participating IT resources. Computing, storage, and networking components need to work together in accomplishing the varying IT and business needs. This sharply increases the management complexity of virtual environments.

Enter the world of containerization, where all the aforementioned barriers get resolved in a single stroke. That is, the evolving concept of application containerization coolly and confidently contributes to the unprecedented success of the software portability goal. A container generally contains a single application. Along with the primary application, all of its relevant libraries, binaries, and other dependencies are packaged together and presented as a comprehensive yet compact container that can be readily shipped, run, and managed in any local or remote environment. Containers are exceptionally lightweight, highly portable, rapidly deployable, extensible, and so on. Further, many industry leaders have come together to form a consortium to embark on a decisive journey towards the systematic production, packaging, and delivery of industry-strength, standardized containers. This conscious and collective move makes Docker deeply penetrative, pervasive, and persuasive. The open source community is simultaneously spearheading the containerization movement through an assortment of concerted activities for simplifying and streamlining the containerization concept. The containerization life cycle steps are being automated through a variety of tools.
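
This packaging idea is most concretely expressed in a Dockerfile. The following is a minimal sketch, assuming a hypothetical Python script named app.py sitting in the build context (the file name and base image are illustrative only):

    # Start from a known base image
    FROM ubuntu:16.04
    # Install the application's runtime dependency inside the image
    RUN apt-get update && apt-get install -y python3
    # Copy the application itself into the image
    COPY app.py /opt/app/app.py
    # Define what runs when a container is started from this image
    CMD ["python3", "/opt/app/app.py"]

Building this file with docker build -t myapp . yields a self-contained image; docker run --rm myapp then starts it on any Docker host, regardless of what is installed there.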

The Docker ecosystem is also growing rapidly in order to bring as much automation as possible into the containerization landscape. Container clustering and orchestration are gaining a lot of ground; thus, geographically distributed containers and their clusters can be readily linked up to produce bigger and better application-aware services. The distributed nature of cloud centers therefore stands to benefit immensely from all the adroit advancements gaining a strong foothold in the container space. Cloud service providers and enterprise IT environments are all set to embrace this unique technology in order to escalate resource utilization and to take the much-needed infrastructure optimization to the next level. On the performance side, plenty of tests demonstrate Docker containers achieving near-native system performance. In short, IT agility through the DevOps aspect is being guaranteed through the smart leverage of Dockerization, and this in turn leads to business agility, adaptivity, and affordability.
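
Multi-container composition, covered later in this book, offers the simplest taste of that orchestration story. The following docker-compose.yml is a minimal sketch (the service names and images are illustrative, not from the book's examples): it declares a web server and a cache that Docker Compose starts, networks, and stops as a single unit with docker-compose up and docker-compose down:

    version: '2'
    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"    # expose the web server on the host's port 8080
      cache:
        image: redis:latest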