
The DevOps 2.1 Toolkit: Docker Swarm

By: Viktor Farcic

Overview of this book

Viktor Farcic's latest book, The DevOps 2.1 Toolkit: Docker Swarm, takes you deeper into one of the major subjects of his international best seller, The DevOps 2.0 Toolkit, and shows you how to successfully integrate Docker Swarm into your DevOps toolset. Viktor shares with you his expert knowledge in all aspects of building, testing, deploying, and monitoring services inside Docker Swarm clusters. You'll go through all the tools required for running a cluster. You'll travel through the whole process with clusters running locally on a laptop. Once you're confident with that outcome, Viktor shows you how to translate your experience to different hosting providers like AWS, Azure, and DigitalOcean. Viktor has updated his DevOps 2.0 framework in this book to use the latest and greatest features and techniques introduced in Docker. We'll go through many practices and even more tools. While there will be a lot of theory, this is a hands-on book. You won't be able to complete it by reading it on the metro on your way to work. You'll have to read this book while in front of the computer and get your hands dirty.
Table of Contents (22 chapters)
Title Page
Credits
About the Author
www.PacktPub.com
Customer Feedback
Preface
Embracing Destruction: Pets versus Cattle

Pushing images to the registry


Before we push our go-demo image, we need a place to push to. Docker offers multiple solutions that act as a registry. We can use Docker Hub (https://hub.docker.com/), Docker Registry (https://docs.docker.com/registry/), or Docker Trusted Registry (https://docs.docker.com/docker-trusted-registry/). On top of those, there are many other solutions from third-party vendors.

Which registry should we use? Docker Hub requires a username and password, and I do not trust you enough to provide my own. One of the goals I defined before I started working on the book was to use only open source tools, so Docker Trusted Registry, while being an excellent choice under different circumstances, is also not suitable. The only option left (excluding third-party solutions) is Docker Registry (https://docs.docker.com/registry/).

The registry is defined as one of the services inside the docker-compose-local.yml (https://github.com/vfarcic/go-demo/blob/master/docker-compose-local.yml) Compose file. The definition is as follows:

  registry:
    container_name: registry
    image: registry:2.5.0
    ports:
      - 5000:5000
    volumes:
      - .:/var/lib/registry
    restart: always

We set registry as the explicit container name, specified the image, and opened port 5000 (both on the host and inside the container).

The registry stores images inside the /var/lib/registry directory, so we mounted it as a volume on the host. That way, data will not be lost if the container fails. Since this is a production service that could be used by many, we defined that it should always be restarted on failure.

Let's run the following commands:

docker-compose \
    -f docker-compose-local.yml \
    up -d registry
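
As a quick sanity check (my addition, not part of the book's flow), the registry exposes an HTTP API whose /v2/ endpoint should respond with an empty JSON body as soon as the container is up:

```shell
# Ping the registry's API root; the -f flag makes curl fail on HTTP errors,
# so the branch tells us whether the registry answered successfully.
if curl -sf http://localhost:5000/v2/ > /dev/null; then
    echo "registry is up"
else
    echo "registry not reachable"
fi
```

If the second message appears, give the container a few seconds to finish starting and try again.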

Now that we have the registry, we can do a dry-run. Let's confirm that we can pull and push images to it:

docker pull alpine

docker tag alpine localhost:5000/alpine

docker push localhost:5000/alpine

Docker uses a naming convention to decide where to pull and push images from. If the name is prefixed with an address, the engine will use it to determine the location of the registry. Otherwise, it assumes that we want to use Docker Hub. Therefore, the first command pulled the alpine image from Docker Hub.

The second command created a tag of the alpine image. The tag is a combination of the address of our registry localhost:5000 and the name of the image. Finally, we pushed the alpine image to the registry running on the same server.
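That convention is easy to mimic. The hypothetical helper below (my sketch, not part of the go-demo code) reproduces the rule: the part before the first / is treated as a registry address only if it contains a dot or a colon, or equals localhost; otherwise, Docker Hub is assumed:

```shell
# image_registry prints where Docker would push/pull the given image name.
image_registry() {
    case "$1" in
        */*)
            prefix=${1%%/*}  # everything before the first slash
            case "$prefix" in
                *.*|*:*|localhost) echo "$prefix" ;;   # looks like an address
                *) echo "Docker Hub" ;;                # just a user/org name
            esac ;;
        *) echo "Docker Hub" ;;                        # no prefix at all
    esac
}

image_registry alpine                  # → Docker Hub
image_registry localhost:5000/alpine   # → localhost:5000
image_registry vfarcic/go-demo         # → Docker Hub
```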

Before we start using the registry in a more serious fashion, let's confirm that the images are indeed persisted on the host:

ls -1 docker/registry/v2/repositories/alpine/

The output is as follows:

_layers
_manifests
_uploads

I won't go into the details of what each of those sub-directories contains. The important thing to note is that the registry persists the images on the host, so no data will be lost if it fails or, in this case, even if we destroy the VM, since the Docker Machine directory is mapped to the same directory on our laptop.

We were a bit hasty when we declared that this registry should be used in production. Even though data is persisted, if the whole VM crashes, there would be downtime until someone brings it up again or creates a new one. Since one of the goals is to avoid downtime whenever possible, we should look for a more reliable solution later on. The current setup will do for now.

Now we are ready to push the go-demo image to the registry:

docker tag go-demo localhost:5000/go-demo:1.0

docker push localhost:5000/go-demo:1.0
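
To confirm the push from the registry's point of view (a check I'm adding here, not one the book runs), its HTTP API can list the tags of a repository; with the registry running, the response should be a JSON document naming go-demo and its tags:

```shell
# Ask the registry which tags exist for the go-demo repository; print a
# short note instead if the registry cannot be reached.
curl -s http://localhost:5000/v2/go-demo/tags/list \
    || echo "registry not reachable"
```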

As with the Alpine example, we tagged the image with the registry prefix and pushed it to the registry. This time, we also added the version number (1.0) as the tag.

The push was the last step in the CI flow. We ran unit tests, built the binary, built the Docker image, ran staging tests, and pushed the image to the registry. Even though we did all those things, we are not yet confident that the service is ready for production. We never tested how it would behave when deployed to a production (or production-like) cluster. We did a lot, but not enough.

If CI were our final objective, this would be the moment when manual validations should occur. While there is a lot of value in manual labor that requires creativity and critical thinking, we cannot say the same for repetitive tasks. The tasks required to convert this Continuous Integration flow into Continuous Delivery and, later on, Continuous Deployment are, indeed, repetitive.

We have the CI process done, and it is time to go the extra mile and convert it into Continuous Delivery.

Before we move into the steps required for the Continuous Integration process to become Continuous Delivery, we need to take a step back and explore cluster management. After all, in most cases, there is no production environment without a cluster.

We'll destroy the VMs at the end of each chapter. That way, you can come back to any part of the book and do the exercises without the fear that you might need to repeat steps from one of the earlier chapters. Such a procedure will also force us to repeat a few things. Practice makes perfect. To reduce your waiting time, I did my best to keep things as small as possible and keep download times to a minimum. Execute the following command:

docker-machine rm -f go-demo

The next chapter is dedicated to the setup and operation of a Swarm cluster.