
Cluster management


The microservices architecture encourages packaging applications and moving them across hosts and environments at a faster pace; as a result, enterprises adopting containerization face the problem of managing an ever-growing number of containers. Containers introduce new administration challenges, such as managing a group of containers or a cluster of container hosts. A container cluster is a group of nodes, where each node is a container host with containers running inside it. It is important for enterprises and teams to be able to manage these container hosts and to establish communication channels across them.

Cluster management tools help operations teams or administrators manage containers and container hosts from a single management console. They assist in moving containers from one host to another, controlling resource allocation (CPU, memory, and network), executing workflows, scheduling jobs/tasks (a job/task is a set of steps to be executed on a cluster), monitoring, reliability, scalability, and so on. Now that you know what clusters are, let's look at the variety of offerings available on the market today and the core value offered by each one. Cluster management is discussed in more detail in the following chapters.

Docker Swarm

Swarm is the native cluster management solution from Docker. Swarm helps you manage a pool of container hosts as a single unit. Swarm is delivered as an image, the Docker Swarm image, which is installed on a node; the container hosts are then configured with TCP ports and TLS certificates so that they can connect to the swarm. Swarm provides an API layer for accessing the container or cluster services, so developers can build their own management interfaces on top of it. Swarm follows a plug-and-play architecture, so most of its components can be swapped out as needed. A few of the cluster management capabilities provided by Swarm are listed here, followed by a short setup sketch:

  • Discovery services for discovering nodes using public/hosted discovery or even a static list of IPs
  • Docker Compose, which can be used for orchestrating a multi-container deployment from a single file
  • Advanced scheduling for strategically placing containers on selected nodes based on ranking/priority strategies and node filters
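
As a minimal sketch of the setup described above, the following commands stand up a classic Swarm cluster using Docker's hosted (token-based) discovery service. The IP addresses, ports, and token are placeholders, and the sketch assumes each host already runs a Docker engine reachable over TCP:

    # Generate a cluster token using the hosted discovery service (run from any host)
    docker run --rm swarm create
    # Suppose the command prints the token <cluster_token>

    # On every container host, join the cluster, advertising that host's Docker endpoint
    docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_token>

    # On the manager node, start the Swarm manager and publish it on port 4000
    docker run -d -p 4000:2375 swarm manage token://<cluster_token>

    # Point the Docker client at the Swarm manager to treat the pool as a single unit
    docker -H tcp://<manager_ip>:4000 info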

Kubernetes

Kubernetes is a cluster manager from Google, one of the first companies to run containers in clusters at massive scale. Kubernetes has many amazing features for cluster management. A few of them are listed here, followed by a short kubectl sketch:

  • Pods: Kubernetes pods are used for logically grouping containers. Pods are scheduled and managed as independent units, and the containers in a pod can share data and communication channels. On the downside, if one container in a pod dies, the whole pod dies; this is acceptable when the containers in a pod are interdependent or tightly coupled.
  • Replication controllers: Replication controllers ensure reliability across hosts. For example, if you always want three pods of a backend service running, the replication controller checks their health on a regular basis and makes sure three pods stay up. If any pod stops responding, the replication controller immediately spins up another instance, ensuring reliability and availability.
  • Labels: Labels are used to collectively name a set of pods so that teams can operate on them as a single unit. Naming can be done by environment, such as dev, staging, and production, or by geographical location. Replication controllers can migrate a collection of pods across nodes as a group by selecting them through labels.
  • Service proxy: Within a large container cluster you need a clean mechanism for resolving pods/container hosts using labels or name queries. The service proxy resolves requests to a single logical set of pods using label-driven selectors. In the future you might see custom proxies that resolve to a pod based on custom configuration; for example, if you want to serve premium customers from one set of frontend pods configured for quick response times and basic customers from another set, you can configure the environment accordingly and route traffic based on smart, domain-driven decisions.
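
As a small illustration of labels, replication controllers, and the service proxy working together, here is a hedged kubectl sketch. The resource names (backend-rc, backend-svc) and labels are hypothetical, and the commands assume kubectl is already configured against a cluster that runs a replication controller named backend-rc:

    # List only the pods carrying the backend/staging labels
    kubectl get pods -l "tier=backend,env=staging"

    # Ask the replication controller to keep three replicas alive;
    # if a pod dies, a replacement is started automatically
    kubectl scale rc backend-rc --replicas=3

    # Expose the pods behind a service; the label selector inherited from the
    # replication controller is what the service proxy uses to route requests
    kubectl expose rc backend-rc --name=backend-svc --port=80 --target-port=8080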

DC/OS

DC/OS (Datacenter Operating System) is a distributed operating system built on top of Apache Mesos for cluster management. Apache Mesos is a cluster manager that integrates seamlessly with container technologies such as Docker for scheduling and fault tolerance. Mesos is in fact a general-purpose cluster manager that is also used in big data environments such as Hadoop and Cassandra, and it works well with batch schedulers, PaaS platforms, long-running applications, and large-scale data storage systems. It also provides a web dashboard for cluster management.

Apache Mesos's complex architecture, configuration, and management make it difficult to adopt directly; DC/OS makes this significantly more straightforward. DC/OS runs on top of Mesos and does for a cluster of machines what the kernel does for your laptop's OS: it provides services such as scheduling, DNS, service discovery, package management, and fault tolerance over a pooled collection of CPU, RAM, and network resources. DC/OS is backed by a strong developer community, rich diagnostics support, and management tooling through a GUI, a CLI, and an API.
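
As a hedged sketch of the CLI-based management mentioned above, the following DC/OS CLI commands assume the dcos client is installed and attached to a running cluster; the Kafka package is used purely as an example of the package management service:

    # Show the nodes (Mesos agents) that make up the cluster
    dcos node

    # List the tasks currently scheduled across the cluster
    dcos task

    # Install a package from the DC/OS package repository (example: Kafka)
    dcos package install kafka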

ACS (Azure Container Service), which was discussed previously, has a reference implementation for DC/OS. Within a few clicks Azure lets you build a DC/OS cluster in the cloud, ready for deploying applications. The same set of services is available for on-premises data centers or private clouds through Azure Stack (Microsoft's platform for running Azure services in your own data center). You also get the additional benefit of integrating with the rich set of other Azure services to increase agility and scalability.
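
The same cluster can also be provisioned from the command line. The following is a rough sketch using the Azure CLI; the resource group and cluster names are hypothetical, and it assumes you have already signed in with az login:

    # Create a resource group to hold the cluster (name and region are placeholders)
    az group create --name acs-demo-rg --location westus

    # Create an Azure Container Service cluster using the DC/OS orchestrator
    az acs create --resource-group acs-demo-rg --name dcos-demo `
        --orchestrator-type dcos --agent-count 3 --generate-ssh-keys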

Two other cluster managers not discussed here are Amazon EC2 Container Service, which is built on top of Amazon EC2 instances and uses a shared-state scheduling service, and CoreOS Tectonic, which offers Kubernetes as a service on AWS (Amazon's cloud platform).