Understanding the key Docker concepts


Docker is a very powerful but very simple application platform. You can get started with running your existing apps in Docker in just a few days, and be ready to move to production in a few days more. This book will take you through lots of examples of .NET Framework and .NET Core applications running in Docker. You'll learn how to build, ship, and run applications in Docker and move on to advanced topics like solution design, security, administration, instrumentation, and continuous integration and continuous delivery (CI/CD).

To start with, you need to understand the core Docker concepts: images, registries, containers, and swarms. You also need to understand how Docker actually runs.

The Docker service and Docker command-line

Docker runs as a background Windows service. This service manages all the running containers and exposes a REST API for consumers to work with containers and other Docker resources. The main consumer of that API is the Docker command-line tool, which is what I use for most of the code samples in this book.

The Docker REST API is public, and there are alternative management tools powered by the API, such as Portainer (an open source project) and Docker Universal Control Plane (UCP), a commercial product. The Docker CLI is very simple to use; you use commands like docker container run to run an application in a container and docker container rm to remove a container.
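To give you a flavor of the CLI (the image and container names here are just examples, not a specific walkthrough from this chapter), a minimal lifecycle looks like this:

  docker container run --detach --name web-test microsoft/iis
  docker container ls
  docker container rm --force web-test

The first command starts a container from the microsoft/iis image in the background, the second lists running containers, and the third forcibly removes the container.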

You can also configure the Docker API to be remotely accessible and configure your Docker CLI to connect to a remote service. This means you can manage a Docker host running in the cloud using Docker commands on your laptop. The setup to allow remote access should also include encryption, so your connection is secure, and in this chapter, I will show you an easy way to configure that.
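As a sketch of what the client side looks like once remote access is configured (docker-server here is a hypothetical host name), you point the CLI at the remote engine with the --host option and enable TLS verification:

  docker --host tcp://docker-server:2376 --tlsverify container ls

You can also set the DOCKER_HOST environment variable instead of passing --host with every command.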

When you have Docker running, you'll start by running containers from images.

Docker images

A Docker image is a complete application package. It contains one application and all of its dependencies: the language runtime, the application host, and the underlying operating system. Logically, the image is a single file, and it's a portable unit: you can share your application by pushing your image to a Docker registry. Anyone who has access can pull that image themselves and run your application in a container. It will behave in exactly the same way for them as it does for you.

Here's a concrete example. An ASP.NET WebForms app runs on Internet Information Services (IIS) in Windows Server. To package the application in Docker, you build an image based on Windows Server Core, add IIS, add ASP.NET, copy your application content, and configure it as a website in IIS. You describe all these steps in a simple script called a Dockerfile, and you can use PowerShell or batch files for each step you need to perform.
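As a rough sketch of what that Dockerfile could look like (the base image, folder, and website names here are illustrative, not the book's actual sample code), assuming the microsoft/aspnet base image, which already has IIS and ASP.NET installed on top of Windows Server Core:

  # escape=`
  FROM microsoft/aspnet
  SHELL ["powershell", "-Command"]
  # copy the published WebForms content into the image
  COPY WebFormsApp C:/web-app
  # replace the default site with one that points at the app folder
  RUN Remove-Website -Name 'Default Web Site'; `
      New-Website -Name 'web-app' -Port 80 -PhysicalPath 'C:\web-app'
  EXPOSE 80

The escape directive switches the Dockerfile escape character to a backtick so Windows paths with backslashes aren't misread, and the SHELL instruction makes PowerShell the default shell for RUN instructions.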

You build the image by running docker image build. The input is the Dockerfile and any resources that need to be packaged into the image (like the web application content). The output is a Docker image. In this case, the image will have a logical size of about 11 GB, but 10 GB of that is the Windows Server Core image you're using as a base, and that image can be shared as the base across many other images (I will cover image layers and caching more in Chapter 4, Pushing and Pulling Images from Docker Registries).
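A minimal sketch of that command, assuming the Dockerfile sits in the current directory and using a made-up image name:

  docker image build --tag dockeronwindows/web-app .

The --tag option gives the image a name you can use later when you run containers or push the image to a registry, and the trailing dot tells Docker to use the current directory as the build context.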

The Docker image is like a snapshot of the filesystem for one version of your application. The image is static, and you distribute it using a registry.

Image registries

A registry is a storage server for Docker images. Registries can be public or private, and there are free public registries and commercial registry servers that allow fine-grained access control for images. Images are stored with a unique name within the registry. Anyone with access can upload an image by running docker image push and download an image by running docker image pull.
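For example (the repository names here are illustrative), pulling a public Microsoft image and pushing your own image look like this; pushing assumes you have already authenticated to the registry with docker login:

  docker image pull microsoft/nanoserver
  docker image push dockeronwindows/web-app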

The most popular registries are the public ones hosted by Docker:

  • Docker Hub is the original registry, which has become hugely popular for open source projects in the Linux ecosystem. It has over 600,000 images stored and has hosted over 12 billion image pulls.
  • Docker Cloud is where you store images you build yourself, and you can configure images to be public or private. It's suitable for internal products, where you can limit access to the images. You can set up Docker Cloud to automatically build images from Dockerfiles stored in GitHub—currently, this is supported only for Linux-based images, but Windows support is coming soon.
  • Docker Store is where you get commercial software, pre-packaged as Docker images. Vendors are increasingly supporting Docker as a platform for their own applications, and you will find software from Microsoft, Oracle, HPE, and more on Docker Store.

In a typical workflow, you might build images as part of a CI pipeline and push them to a registry if all the tests pass. The image is then available for other users to run your application in a container.

Docker containers

A container is an instance of an application created from an image. The image contains the whole application stack, and it also specifies the process to start the application, so Docker knows what to do when you run a container. You can run multiple containers from the same image, and you can run containers in different ways (I describe them all in the next chapter). You start your application with docker container run, specifying the name of the image and your configuration options. Distribution is built into the Docker platform, so if you don't have a copy of the image on the host where you're trying to run the container, Docker will pull the image first. Then it starts the specified process, and your app is running in a container.
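As a hedged example, reusing the made-up image name from the build sketch earlier, this runs the web app in the background and publishes port 80 so traffic from outside the host can reach IIS in the container:

  docker container run --detach --publish 80:80 --name web dockeronwindows/web-app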

Containers don't need a fixed allocation of CPU or memory, and the processes for your application can use as much of the host's compute power as they need. You can run dozens of containers on modest hardware, and unless the applications all try to use a lot of CPU at the same time, they will happily run concurrently. You can also start containers with resource limits to restrict how much CPU and memory they have access to.
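Those limits are options on the run command; a sketch that caps a container at two CPU cores and 2 GB of memory (again using the illustrative image name):

  docker container run --detach --cpus 2 --memory 2G dockeronwindows/web-app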

Docker provides the container runtime as well as image packaging and distribution. In a small environment and in development, you will manage individual containers on a single Docker host, which could be your laptop or a test server. When you move to production, you'll need high availability and the option to scale, and that comes with Docker swarm.

Docker swarm

Docker can run on a single machine or as one node in a cluster of machines that are all running Docker. This cluster is called a swarm, and you don't need to install anything extra to run in swarm mode. You install Docker on a set of machines; on the first, you run docker swarm init to initialize the swarm, and on the others, you run docker swarm join to join the swarm.
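A sketch of that setup with made-up addresses; the init command prints the exact join command, including a join token, for the other machines to run:

  # on the first machine
  docker swarm init --advertise-addr 192.168.1.10
  # on each of the other machines, using the token printed by the init command
  docker swarm join --token <join-token> 192.168.1.10:2377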

I will cover swarm mode in depth in Chapter 7, Orchestrating Distributed Solutions with Docker Swarm, but it's important to know before you get much further that the Docker platform has high availability, scale, and resilience built in. Your Docker journey will hopefully lead you to production, where you'll need all these attributes.

In swarm mode Docker uses exactly the same artifacts, so you can run your app across 50 containers in a 20-node swarm, and the functionality will be the same as when you run it in a single container on your laptop. On the swarm, your app is more performant and tolerant of failure, and you'll be able to perform automated rolling updates to new versions.
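On a swarm you run your app as a service and Docker schedules the containers for you; a hedged sketch, using the illustrative image name (Chapter 7 covers services properly), that asks for 50 replicas across the swarm:

  docker service create --name web --replicas 50 dockeronwindows/web-app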

Nodes in a swarm use secure encryption for all communication, using trusted certificates for each node. You can store application secrets as encrypted data in the swarm too, so database connection strings and API keys can be saved securely, and the swarm will deliver them only to containers that need them.
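A sketch of that workflow with made-up names, creating a secret from a local file and then making it available to a single service:

  docker secret create db-connection ./connection-string.txt
  docker service create --name web --secret db-connection dockeronwindows/web-app

Inside the container, the secret is surfaced to the application as a file rather than an environment variable.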

Docker is an established platform. It's new to Windows Server 2016, but it arrived on Windows after four years of releases on Linux. Docker is written in Go, which is a cross-platform language, and only a minority of code is specific to Windows. When you run Docker on Windows, you're running an application platform that has had years of successful production use.