Extending Docker

By Russ McKendrick

What are the limits?


So, in the previous example, we launched two containers, each running different versions of PHP on top of our (extremely simple) codebase. While it demonstrated how simple it is to launch containers, it also exposed some potential problems and single points of failure.

To start with, our codebase is stored on the host machine's filesystem, which means that we can only run the container on that single host machine. What happens if it goes down for any reason?

There are a few ways we could get around this with a vanilla Docker installation. The first is to use the official PHP image as a base to build our own custom image so that we can ship our code along with PHP. To do this, add a Dockerfile to the app1 directory that contains the following content:

### Dockerfile
FROM php:5.6-apache
MAINTAINER Russ McKendrick <[email protected]>
ADD index.php /var/www/html/index.php

We can then build our custom image using the following command:

docker build -t app1:php-5.6 .

When you run the build command, you will see Docker work through each of the instructions in the Dockerfile, pulling down the base image if it is not already present and then adding our index.php file on top of it.

Once you have your image built, you could push it as a private image to the Docker Hub or your own self-hosted private registry; another option is to export the custom image as a .tar file and then copy it to each of the instances that need to run your custom PHP container.
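
If you go down the registry route, pushing the image could look something like the following; this is only a sketch, your-docker-hub-user is a placeholder for your own Docker Hub account (or registry address), and you would need to have run docker login first:

# tag the local image against your account, then push it
docker tag app1:php-5.6 your-docker-hub-user/app1:php-5.6
docker push your-docker-hub-user/app1:php-5.6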

To export the image as a tar file, you will need to run the docker save command:

docker save app1:php-5.6 > ~/app1-php-56.tar

This will make a copy of our custom image as a tar file; the resulting file should be around 482 MB in size.
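
To get the tar file over to your other machines, something as simple as scp will do; docker-host-2 here is a hypothetical hostname for our second host:

scp ~/app1-php-56.tar user@docker-host-2:~/app1-php-56.tar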

Once the tar file has been copied to our second host, you will need to run the docker load command there to import the image:

docker load < ~/app1-php-56.tar

Then we can launch a container that has our code baked in by running the following command:

docker run --name app1 -d -p 80:80 -it app1:php-5.6

When you import and then run the custom container, docker load will confirm that the image has been loaded, and docker run will return the ID of the newly launched container.
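
As a quick check that everything is working, you could list the running containers and request the page from the host; the exact output will depend on your setup:

# confirm the app1 container is up and publishing port 80
docker ps

# fetch the page served by our index.php
curl http://localhost/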

So far so good? Well, yes and no.

It's great that we can add our codebase to a custom image out of the box and then ship the image in any of the following ways:

  • The official Docker Hub

  • Our own private registry

  • Exporting the image as a tar file and copying it across our other hosts

However, what about containers that are processing data that is changing all the time, such as a database? What are our options for a database?

Consider that we are running the official MySQL container from https://hub.docker.com/_/mysql/. We could mount the folder where our databases are stored (that is, /var/lib/mysql/) from the host machine, but that could cause permissions issues with the files once they are mounted within the container.

To get around this, we could create a data volume that contains a copy of our /var/lib/mysql/ directory. This means that we are keeping our databases separate from our container, so we can stop, start, and even replace the MySQL container without destroying our data.
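
As a rough sketch of this approach, we could create a named data volume and attach it to the official MySQL image as follows; the volume name, root password, and image tag are placeholders:

# create a named volume to hold the databases
# (on Docker versions before 1.13, use: docker volume create --name mysql_data)
docker volume create mysql_data

# launch MySQL with the volume mounted over /var/lib/mysql
docker run --name mysql -d \
  -v mysql_data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7

Because the databases live in the volume rather than in the container's own filesystem, removing the mysql container and launching a replacement with the same -v flag picks up the existing data.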

This approach, however, binds us to running our MySQL container on a single host, which is a big single point of failure.

If we have the resources available, we could make sure that the host where we are hosting our MySQL container has multiple layers of redundancy, such as a number of hard drives in a RAID configuration that allows us to weather more than one drive failure. We could also have multiple power supply units (PSUs) fed by different power feeds, so that if we have a problem with the power on one of the feeds, the host machine stays online.

We can do the same with the networking on the host machine: NICs plugged into different switches, which are in turn fed by different power feeds and network providers.

While this does give us a lot of redundancy, we are still left with a single host machine, which is now getting quite expensive: all of this redundancy in drives, networking, and power feeds is an additional cost on top of what we are already paying for the host machine itself.

So, what's the solution?

This is where extending Docker comes in. While Docker, out of the box, does not support moving volumes between host servers, we can plug in a filesystem extension that allows us to migrate volumes between hosts or mount a volume from a shared filesystem, such as NFS.
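
For example, with a volume plugin installed, switching to it is largely a case of creating the volume through that plugin's driver; the following sketch uses flocker (one of the plugins covered later in the book) as the driver name and assumes the plugin is already installed and configured:

# create the volume through the plugin's driver rather than the default local driver
docker volume create --driver flocker mysql_data

# attach it to the MySQL container exactly as before
docker run --name mysql -d \
  -v mysql_data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7

Depending on the plugin, the volume now lives with the plugin's storage backend rather than on a single host's disk, so the same docker run command can be repeated on another host and reattach to the same data.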

If we have this in place for our MySQL container, then should there be a problem with the host machine, it is not an issue, as the data volume can simply be mounted on another host.

Once the volume is mounted, the container can carry on where it left off, as our data lives on a volume that is either being replicated to the new host or is accessible via a filesystem share backed by some redundant storage, such as a SAN.

The same can also be said for networking. As mentioned in the summary of the IBM research report, Docker's NAT-based networking can be a bottleneck both for performance and when designing your container infrastructure. If it becomes a problem, you can add a networking extension and offload your containers' networking to a software-defined network (SDN), rather than having the Docker engine manage it with NAT and a bridge interface via iptables rules on the host machine.
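
As a sketch of what this looks like in practice, assume a third-party network driver has been installed and has registered itself with Docker under the hypothetical name sdn-driver; the network name is also a placeholder:

# create a network backed by the third-party driver
docker network create --driver sdn-driver app-net

# attach containers to that network instead of the default NAT'd bridge
docker run --name app1 -d --net=app-net app1:php-5.6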

Once you introduce this level of functionality to the core of Docker, it can become difficult to manage your containers. In an ideal world, you shouldn't have to worry about which host your container is running on; if your container or host machine stops responding for any reason, your containers should automatically pop up on another host somewhere within your container network and carry on where they left off.

In the following chapters of this book, we will be looking at how to achieve some of the concepts that we have discussed in this chapter, and we will look at tools written by Docker, designed to run alongside the core Docker engine. While these tools may not be as functional as some of the tools we will be looking at in the later chapters, they serve as a good introduction to some of the core concepts that we will be covering when it comes to creating clusters of Docker hosts and then orchestrating your containers.

Once we have looked at these tools, we will look at volume and networking plugins. We will cover a few of the more well-known plugins that add functionality to the Docker core, allowing us to build a more redundant platform.

Once we have been hands-on with pre-written plugins, we will look at the best way to approach writing your own plugin.

In the final chapters of the book, we will start to look at third-party tools that allow you to configure, deploy, and manage the whole life cycle of your containers.