
Docker Orchestration

By: Randall Smith

Overview of this book

Docker orchestration is what you need when transitioning from deploying containers individually on a single host to deploying complex multi-container apps on many machines. This book covers the new orchestration features of Docker 1.12 and helps you efficiently build, test, and deploy your application using Docker. You will be shown how to build multi-container applications using Docker Compose. You will also be introduced to the building blocks for multi-host Docker clusters, such as registries, overlay networks, and shared storage, using practical examples. This book gives an overview of core tools such as Docker Machine, Swarm, and Compose, which will enhance your orchestration skills. You'll learn how to set up a swarm using its decentralized building blocks. Next, you'll be shown how to make the most of the built-in orchestration features of Docker Engine, and you'll use third-party tools such as Kubernetes, Mesosphere, and CoreOS to orchestrate your existing processes. Finally, you will learn to deploy cluster hosts on cloud services and automate your infrastructure.

Chapter 1. Getting Started with Docker Orchestration

Initially, Internet services ran directly on hardware, and life was okay. To scale a service to handle peak load, one needed to buy enough hardware for the peak. When that extra capacity was no longer needed, the hardware sat unused or underused, but ready to serve. Unused hardware is a waste of resources. There was also the constant threat of configuration drift caused by the subtle changes made with each new install.

Then came VMs and life was good. VMs could be scaled to the size that was needed and no more. Multiple VMs could be run on the same hardware. If there was an increase in demand, new VMs could be started on any physical server that had room. More work could be done on less hardware. Even better, new VMs could be started in minutes when needed, and destroyed when the load slackened. It was even possible to outsource the hardware to companies such as Amazon, Google, and Microsoft. Thus elastic computing was born.

VMs, too, had their problems. Each VM required that additional memory and storage space be allocated to support the operating system. In addition, each virtualization platform had its own way of doing things. Automation that worked with one system had to be completely retooled to work with another. Vendor lock-in became a problem.

Then came Docker. What VMs did for hardware, Docker does for the VM. Services can be started across multiple servers and even multiple providers. Once deployed, containers can be started in seconds without the resource overhead of a full VM. Even better, applications developed in Docker can be deployed exactly as they were built, minimizing the problems of configuration drift and package maintenance.

The question is: how does one do it? That process is called orchestration, and like an orchestra, there are a number of pieces needed to build a working cluster. In the following chapters, I will show a few ways of putting those pieces together to build scalable, reliable services with faster, more consistent deployments.

Let's go through a quick review of the basics so that we are all on the same page. The following topics will be covered:

  • How to install Docker Engine on Amazon Web Services (AWS), Google Compute Engine (GCE), Microsoft Azure, and a generic Linux host with docker-machine

  • An introduction to Docker-specific distributions including CoreOS, RancherOS, and Project Atomic

  • Starting, stopping, and inspecting containers with Docker

  • Managing Docker images
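
As a taste of what the topics above involve, here is a minimal sketch of the workflow with docker-machine and the Docker CLI. The host name, driver, and image choices here are illustrative assumptions, and the AWS example assumes your credentials are already configured in your environment:

```shell
# Provision a Docker host on AWS with docker-machine
# (the amazonec2 driver reads AWS credentials from the environment)
docker-machine create --driver amazonec2 aws-host1

# Point the local Docker client at the new host
eval "$(docker-machine env aws-host1)"

# Start, inspect, stop, and remove a container
docker run -d --name web nginx
docker inspect web
docker stop web
docker rm web

# Manage images: list what is cached locally, then remove one
docker images
docker rmi nginx
```

The same docker-machine command works with other drivers, such as google, azure, or generic for an existing Linux host, which is what lets the rest of the workflow stay identical across providers.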