Embracing Destruction: Pets versus Cattle

Running unit tests and building service binary


We'll use Docker Compose for the CI flow. As you will see soon, Docker Compose has little, if any, value when operating the cluster. However, for operations that should be performed on a single machine, Docker Compose is still the easiest and the most reliable way to go.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration. Compose is great for development, testing, and staging environments, as well as CI workflows.
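As an illustration of that workflow, with a Compose file in the current directory, a single command creates and starts everything it defines. These are generic commands, not specific to the go-demo repository; in this chapter we'll use run instead of up:

docker-compose up -d # create and start all the services defined in docker-compose.yml
docker-compose down  # stop and remove them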

The repository we cloned earlier already has all the services we'll need defined inside the docker-compose-test-local.yml (https://github.com/vfarcic/go-demo/blob/master/docker-compose-test-local.yml) file.

Let's take a look at the content of the docker-compose-test-local.yml (https://github.com/vfarcic/go-demo/blob/master/docker-compose-test-local.yml) file:

cat docker-compose-test-local.yml

The service we'll use for our unit tests is called unit. It is as follows:

unit:
    image: golang:1.6
    volumes:
      - .:/usr/src/myapp
      - /tmp/go:/go
    working_dir: /usr/src/myapp
    command: bash -c "go get -d -v -t && go test --cover -v \
./... && go build -v -o go-demo"

It is a relatively simple definition. Since the service is written in Go, we are using the golang:1.6 image.

Next, we are mounting a few volumes. Volumes are, in this case, host directories mounted inside the container. They are defined with two arguments separated by a colon. The first argument is the path to the host directory, while the second represents a directory inside the container. Any file already inside the host directory will be available inside the container, and vice versa.

The first volume is used for the source files. We are sharing the current host directory . with the container directory /usr/src/myapp. The second volume is used for Go libraries. Since we want to avoid downloading all the dependencies every time we run unit tests, they will be cached inside the host directory /tmp/go, which is mounted over the container's /go directory (the GOPATH in the golang image). That way, dependencies will be downloaded only the first time we run the service.
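As a side note, if you ever want to force a fresh download of all the dependencies, removing the cache directory on the host before the next run should be enough:

rm -rf /tmp/go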

The volumes are followed by the working_dir instruction. When the container is run, it will use the specified value as its working directory.
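To make the relationship between those instructions and plain Docker clearer, the same image, mounts, and working directory could be expressed with a single docker run command. This is only an equivalent sketch, not a command we'll use in the flow:

docker run --rm \
    -v "$PWD":/usr/src/myapp \
    -v /tmp/go:/go \
    -w /usr/src/myapp \
    golang:1.6 \
    bash -c "go get -d -v -t && go test --cover -v ./... && go build -v -o go-demo"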

Finally, we are specifying the command we want to run inside the container. I won't go into details since they are specific to Go. In short, we download all the dependencies (go get -d -v -t), run the unit tests (go test --cover -v ./...), and build the go-demo binary (go build -v -o go-demo). Since the directory with the source code is mounted as a volume, the binary will be stored on the host and available for later use.
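For reference, here are the same three steps with the meaning of each flag spelled out:

go get -d -v -t          # download dependencies (-d: download only), including test-only ones (-t)
go test --cover -v ./... # run the tests of all the packages recursively, with code coverage
go build -v -o go-demo   # compile the binary and name the output go-demo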

With this single Compose service, we defined two steps of the CI flow: running the unit tests and building the binary.

Please note that even though we run the service called unit, the real purpose of this CI step is to run any type of tests that do not require deployment. Those are the tests we can execute before we build the binary and, later on, Docker images.
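If, for example, we wanted to extend this step with static analysis, we could override the service's command at run time. What follows is only a sketch; go vet is not part of the go-demo flow:

docker-compose \
    -f docker-compose-test-local.yml \
    run --rm unit \
    bash -c "go get -d -v -t && go vet ./..."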

Let's run the following command:

docker-compose \
    -f docker-compose-test-local.yml \
    run --rm unit

Note

A note to Windows users

You might experience a problem with volumes not being mapped correctly. If you see an Invalid volume specification error, please export the COMPOSE_CONVERT_WINDOWS_PATHS environment variable set to 0:

export COMPOSE_CONVERT_WINDOWS_PATHS=0

If that fixes the problem with volumes, please make sure that the variable is exported every time you run docker-compose.

We specified that Compose should use the docker-compose-test-local.yml file (the default is docker-compose.yml) and run the service called unit. The --rm argument means that the container should be removed once it stops. The run command should be used for services that are not meant to run forever. It is perfect for batch jobs and, as in this case, for running tests.
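Since we used the --rm argument, no container should be left behind once the run finishes. If you'd like to confirm that, the following command should list no unit container:

docker-compose \
    -f docker-compose-test-local.yml \
    ps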

As you can see from the output, we pulled the golang image, downloaded service dependencies, successfully ran the tests, and built the binary.

We can confirm that the binary is indeed built and available on the host by listing the files in the current directory using the following command. For brevity, we'll filter the result:

ls -l *go-demo*
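Keep in mind that, since the binary was compiled inside a Linux container, it is a Linux executable regardless of the host OS. If the file utility is available on your host, you can confirm that as well (the exact output depends on your system):

file go-demo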

Now that we passed the first round of tests and have the binary, we can proceed and build the Docker images.