
Developing with Docker

By Jarosław Krochmalski

Overview of this book

This fast-paced practical guide will get you up and running with Docker. Using Docker, you will be able to build, ship, and run distributed applications. You will start by quickly installing Docker and working with images and containers. We will present different types of containers and their applications, and show you how to find and build images. You will learn how you can contribute to the image repository by publishing your own images. This will familiarize you with the image-building process, and you will be able to successfully run your programs within containers. By the end of this book, you will be well equipped to deploy your applications using Docker and will have a clear understanding of the concepts, techniques, and practical methods needed to get them running in production systems.

Tools overview


On Windows, depending on the Windows version you use, there are two choices: Docker for Windows if you are on Windows 10 or later, or Docker Toolbox for all earlier versions of Windows. The same applies to Mac OS. The newest offering is Docker for Mac, which runs as a native Mac application and uses xhyve to virtualize the Docker Engine environment and the Linux kernel. For earlier versions of Mac that don't meet the Docker for Mac requirements (we are going to list them in Chapter 2, Installing Docker), you should pick Docker Toolbox for Mac. The idea behind Docker Toolbox and the native Docker applications is the same: to virtualize the Linux kernel and Docker Engine on top of your operating system.

For the purpose of this book, we will be using Docker Toolbox, as it is universal; it will run on all Windows and Mac OS versions. The installation package for Windows and Mac OS is wrapped in an executable called Docker Toolbox. The package contains all the tools you need to begin working with Docker. Of course, there are tons of additional third-party utilities compatible with Docker, some of them very useful. We will present some of them briefly in Chapter 9, Using Docker in Development. But for now, let's focus on the default toolset. Before we start the installation, let's look at the tools the installer package contains, to better understand what changes will be made to your system.

Docker Engine and Docker Engine client

Docker is a client-server application. It consists of a daemon that does the important work: it builds and downloads images, starts and stops containers, and so on. The daemon exposes a REST API that specifies interfaces for interacting with it and is used for remote management. Docker Engine accepts Docker commands from the command line, such as docker run to run an image, docker ps to list running containers, docker images to list images, and so on.

The Docker client is a command-line program that is used to manage Docker hosts running Linux containers. It communicates with the Docker server using the REST API wrapper. You will interact with Docker by using the client to send commands to the server.
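A minimal sketch of a typical client session illustrates this client-server exchange; the image name used here (hello-world, a small test image on Docker Hub) is just an example:

```shell
# Each command below is sent by the client to the daemon's REST API.
docker pull hello-world     # download an image from the registry
docker run hello-world      # start a container from that image
docker ps -a                # list containers, including stopped ones
docker images               # list images present on this host
```

All of these commands assume a running Docker daemon that the client can reach.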

Docker Engine works only on Linux. If you want to run Docker on Windows or Mac OS, or want to provision multiple Docker hosts on a network or in the cloud, you will need Docker Machine.

Docker Machine

Docker Machine is a fairly new command-line tool created by the Docker team to manage the Docker servers you can deploy containers to. It deprecates the old way of installing Docker with the Boot2Docker utility. Docker Machine eliminates the need to create virtual machines manually and install Docker on them before starting containers; it handles the provisioning and installation process for you behind the scenes. In other words, it's a quick way to get a new virtual machine provisioned and ready to run Docker containers. This is an indispensable tool when developing a PaaS (Platform as a Service) architecture. Docker Machine not only creates a new VM with the Docker Engine installed in it, but also sets up the certificate files for authentication and then configures the Docker client to talk to it. For flexibility, Docker Machine introduces the concept of drivers. Using drivers, Docker is able to communicate with various virtualization software packages and cloud providers. In fact, when you install Docker Toolbox on Windows or Mac OS, the default virtualbox driver will be used. The following command will be executed behind the scenes:

docker-machine create --driver=virtualbox default

Another available driver is amazonec2 for Amazon Web Services. It can be used to install Docker on Amazon's cloud; we will do it later in this chapter. There are a lot of drivers ready to be used, and more are coming all the time. The list of existing official drivers, with their documentation, is always available at the Docker drivers website: https://docs.docker.com/machine/drivers.
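As a sketch, creating a machine on Amazon EC2 looks very similar to the VirtualBox command; the machine name (aws-docker) and region below are arbitrary examples, and real use requires valid AWS credentials in your environment or AWS configuration:

```shell
# Provision a Docker host on Amazon EC2 (credentials are read from
# the environment or the AWS credentials file; names are placeholders).
docker-machine create \
  --driver amazonec2 \
  --amazonec2-region eu-central-1 \
  aws-docker
```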

The list contains the following drivers at the moment:

  • Amazon Web Services

  • Microsoft Azure

  • Digital Ocean

  • Exoscale

  • Google Compute Engine

  • Generic

  • Microsoft Hyper-V

  • OpenStack

  • Rackspace

  • IBM Softlayer

  • Oracle VirtualBox

  • VMware vCloud Air

  • VMware Fusion

  • VMware vSphere

Apart from these, there are also a lot of third-party driver plugins freely available on sites such as GitHub. You can find additional drivers for different cloud providers and virtualization platforms, such as OVH Cloud or Parallels for Mac OS; you are not limited to Amazon's AWS or Oracle's VirtualBox. As you can see, the choice is very broad.

Tip

If you cannot find a specific driver for your cloud provider, try looking for it on GitHub.

When you install Docker Toolbox on Windows or Mac OS, Docker Machine will be selected by default. It's mandatory, and currently the only way to run Docker on these operating systems. Installing Docker Machine is not obligatory on Linux; there is no need to virtualize the Linux kernel there. However, if you want to deal with cloud providers, or just want a common runtime environment portable between Mac OS, Windows, and Linux, you can install Docker Machine on Linux as well. We will describe the process later in this chapter. Docker Machine will also be used behind the scenes by the graphical tool Kitematic, which we will present in a while.

After the installation process, Docker Machine will be available as a command-line tool: docker-machine.
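A typical post-installation workflow, sketched below, shows how the pieces fit together; default is the machine name Docker Toolbox creates for you:

```shell
docker-machine ls                       # list the machines you manage
docker-machine start default            # boot the VM if it is stopped
eval "$(docker-machine env default)"    # point the Docker client at it
docker ps                               # commands now target that VM
```

The eval line sets environment variables (such as DOCKER_HOST) so that subsequent docker commands talk to the selected machine.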

Kitematic

Kitematic is a software tool you can use to run containers through a simple, yet robust graphical user interface (GUI). In 2015, Docker acquired the Kitematic team, hoping to attract many more developers and to open containerization up to a wider, more general public.

Kitematic is now included by default when installing Docker Toolbox on Mac OS and Windows. You can use it to comfortably search for and fetch the images you need from Docker Hub. The tool can also be used to run your own application containers. Using the GUI, you can edit environment variables, map ports, configure volumes, study logs, and get command-line access to the containers. It is worth mentioning that you can seamlessly switch between the Kitematic GUI and the command-line interface to run and manage application containers: any changes you make on the command line will be directly reflected in Kitematic. Kitematic is very convenient; however, if you want more control when dealing with containers, or just want to use scripting, the command line will be the better solution. The tool is simple to use, as you will see at the end of this chapter, when we test our setup on a Mac or Windows PC. For the rest of the book, we will be using the command-line interface for working with Docker.

Docker Compose

Compose is a tool, executed from the command line as docker-compose, that replaces the old fig utility. It's used to define and run multi-container Docker applications. While it's easy to imagine a multi-container application (such as a web server in one container and a database in another), using multiple containers is not mandatory; if your application fits in a single Docker container, there will be no use for docker-compose. In real life, however, it's very likely that your application will span multiple containers. With docker-compose, you use a Compose file to configure your application's services so that they can be run together in an isolated environment. Then, using a single command, you create and start all the services from your configuration. When it comes to multi-container applications, docker-compose is great for development and testing, as well as for continuous integration workflows.
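As a sketch, a minimal Compose file for the web-server-plus-database example above might look like the following; the image names and port mapping are illustrative, not prescriptive:

```yaml
# docker-compose.yml -- a hypothetical two-service application
version: '2'
services:
  web:
    image: nginx          # web server, reachable on the host at port 8080
    ports:
      - "8080:80"
    depends_on:
      - db                # start the database container first
  db:
    image: postgres       # database in its own container
```

Running docker-compose up from the directory containing this file creates and starts both containers together.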

We will use docker-compose to create multi-container applications in Chapter 6, Creating Images, later in this book.

Oracle VirtualBox

Oracle VM VirtualBox is a free and open source hypervisor for x86 computers from Oracle. It will be installed by default when you install Docker Toolbox. It supports the creation and management of virtual machines running Windows, Linux, BSD, OS/2, Solaris, and so on. In our case, Docker Machine, using the virtualbox driver, will use VirtualBox to create and boot a tiny Linux distribution capable of running the Docker Engine. It's worth mentioning that you can also run this tiny virtualized Linux straight from VirtualBox itself.

Every Docker machine you have created using docker-machine or Kitematic will be visible and available to boot in VirtualBox when you run it directly, as shown in the following screenshot:

You can start, stop, reset, change settings, and read logs in the same way as for other virtualized operating systems.

Tip

You can use VirtualBox on Windows or Mac for purposes other than Docker.

Git

Git is a distributed version control system that is widely used for software development and other version control tasks. It emphasizes speed, data integrity, and support for distributed, non-linear workflows. Docker Machine and the Docker client follow Git's pull/push model for fetching needed dependencies from the network. For example, if you decide to run a Docker image that is not present on your local machine, Docker will fetch that image from Docker Hub. Docker doesn't internally use Git for any kind of resource versioning. It does, however, rely on hashing to uniquely identify filesystem layers, which is very similar to what Git does. Docker also took its initial inspiration from Git's notions of commits, pushes, and pulls. Git is included in the Docker Toolbox installation package.

From a developer's perspective, there are tools that are especially useful in a programmer's daily job, be it the IntelliJ IDEA Docker Integration Plugin for Java fans or Visual Studio 2015 Tools for Docker for those who prefer C#. They let you download and build Docker images, create and start containers, and carry out other related tasks straight from your favorite IDE. We will cover them in more detail in the following chapters.

Apart from the tools included in Docker's distribution package (Docker Toolbox for older versions of Windows, or Docker for Windows and Docker for Mac), there are hundreds of third-party tools, such as Kubernetes and Helios (for Docker orchestration), Prometheus (for monitoring and metrics), or Swarm and Shipyard (for managing clusters). As Docker attracts more and more attention, new Docker-related tools pop up almost every week. We will try to briefly cover the most interesting ones, and more resources, in Chapter 9, Using Docker in Development.

But these are not the only tools available to you. Additionally, Docker provides a set of APIs that can be very handy. One of them is the Remote API, for the management of images and containers. Using this API, you will be able to distribute your images to the runtime Docker Engine. A container can be shifted to a different machine that runs Docker, and executed there without compatibility concerns. This may be especially useful when creating PaaS (Platform as a Service) architectures. There's also the Stats API, which exposes live resource usage information (such as CPU, memory, network I/O, and block I/O) for your containers. This API endpoint can be used to create tools that show how your containers behave, for example, on a production system.
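As a sketch of what talking to these APIs looks like, the commands below query a local daemon over its default Unix socket; the endpoints are from the standard Remote API, but CONTAINER_ID is a placeholder you must replace with a real container ID:

```shell
# List running containers via the Remote API over the local Unix socket.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# One-shot resource-usage snapshot (CPU, memory, network, and block I/O)
# for one container; CONTAINER_ID is a placeholder.
curl --unix-socket /var/run/docker.sock \
  "http://localhost/containers/CONTAINER_ID/stats?stream=false"
```

Tools such as Kitematic and the IDE plugins mentioned above are, in essence, clients built on top of these same endpoints.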