Getting Started with Containerization

By Dr. Gabriel N. Schenker, Hideto Saito, Hui-Chuan Chloe Lee, Ke-Jou Carol Hsu

Overview of this book

Kubernetes is an open source orchestration platform for managing containers in a cluster environment. This Learning Path introduces you to the world of containerization, in addition to providing you with an overview of Docker fundamentals. As you progress, you will be able to understand how Kubernetes works with containers. Starting with creating Kubernetes clusters and running applications with proper authentication and authorization, you'll learn how to create high-availability Kubernetes clusters on Amazon Web Services (AWS), and also learn how to use kubeconfig to manage different clusters. Whether it is learning about Docker containers and Docker Compose, or building a continuous delivery pipeline for your application, this Learning Path will equip you with all the right tools and techniques to get started with containerization. By the end of this Learning Path, you will have gained hands-on experience of working with Docker containers and orchestrators, including SwarmKit and Kubernetes.

This Learning Path includes content from the following Packt products:

• Kubernetes Cookbook - Second Edition by Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu
• Learn Docker - Fundamentals of Docker 18.x by Gabriel N. Schenker

Scaling a service


Now, let's assume for a moment that our sample application has gone live on the web and become very successful. Loads of people want to see our cute animal images, and now we're facing a problem: our application has started to slow down. To counteract this, we want to run multiple instances of the web service. With Docker Compose, this is readily done.
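
For context, a minimal docker-compose.yml of the kind assumed in this scenario might look like the following sketch. The image name is hypothetical, and the fixed host-port mapping of 3000:3000 is an assumption based on the port conflict described further down:

version: "2.4"
services:
  web:
    image: our-org/animals-web:1.0   # hypothetical image name
    ports:
      - "3000:3000"                  # fixed host port 3000 - fine for a single instance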

Running more instances of a service is also called scaling out. We can use docker-compose to scale our web service to, say, three instances:

$ docker-compose up --scale web=3

If we do this, we are in for a surprise. The output will look similar to the following screenshot:

Output of docker-compose --scale

The second and third instances of the web service fail to start. The error message tells us why: we cannot use the same host port more than once. When instances 2 and 3 try to start, Docker realizes that port 3000 is already taken by the first instance. What can we do? Well, we can just let Docker decide which host port to use...
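
One way to do that, shown here as a sketch rather than as the book's exact solution, is to publish only the container port in the compose file (only the relevant fragment is shown). Docker then maps each instance to a free ephemeral host port, so the three instances no longer compete for port 3000:

services:
  web:
    image: our-org/animals-web:1.0   # hypothetical image name, as in the earlier sketch
    ports:
      - "3000"                       # container port only - Docker assigns a free host port per instance

Afterwards, we can ask Docker Compose which host port a particular instance was bound to, for example:

$ docker-compose port --index=2 web 3000

This prints the host address and port assigned to the second instance of the web service.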