Getting Started with Containerization

By Dr. Gabriel N. Schenker, Hideto Saito, Hui-Chuan Chloe Lee, Ke-Jou Carol Hsu

Overview of this book

Kubernetes is an open source orchestration platform for managing containers in a cluster environment. This Learning Path introduces you to the world of containerization, in addition to providing you with an overview of Docker fundamentals. As you progress, you will be able to understand how Kubernetes works with containers. Starting with creating Kubernetes clusters and running applications with proper authentication and authorization, you'll learn how to create high-availability Kubernetes clusters on Amazon Web Services (AWS), and also learn how to use kubeconfig to manage different clusters. Whether it is learning about Docker containers and Docker Compose, or building a continuous delivery pipeline for your application, this Learning Path will equip you with all the right tools and techniques to get started with containerization. By the end of this Learning Path, you will have gained hands-on experience of working with Docker containers and orchestrators, including SwarmKit and Kubernetes.

This Learning Path includes content from the following Packt products:

• Kubernetes Cookbook - Second Edition by Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu
• Learn Docker - Fundamentals of Docker 18.x by Gabriel N. Schenker
Table of Contents (25 chapters)
Title Page
Copyright
About Packt
Contributors
Preface
Index

Deploying a first application


We have created a few Docker Swarms on various platforms. Once created, a swarm behaves the same way on any platform, and the way we deploy and update applications on a swarm is not platform-dependent. Avoiding vendor lock-in when using a swarm has been one of Docker's main goals: swarm-ready applications can be effortlessly migrated from, say, a swarm running on-premises to a cloud-based swarm. It is even technically possible to run part of a swarm on-premises and another part in the cloud. This works, though one must of course consider the side effects of the higher latency between nodes in geographically distant areas.

Now that we have a highly available Docker Swarm up and running, it is time to run some workloads on it. Here, I'm using a local swarm created with Docker Machine. We'll start by creating a single service. For this, we need to SSH into one of the manager nodes; I select node-1:

$ docker-machine ssh node-1
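As a preview of what follows, a single service can be created on a manager node with `docker service create`. The service name, replica count, and image below are illustrative choices, not taken from the book's example:

```shell
# Create a service named "web" running three replicas of the nginx image,
# publishing container port 80 on port 8080 via the swarm's routing mesh.
# (Must be run on a manager node; names and values here are illustrative.)
docker service create \
  --name web \
  --replicas 3 \
  --publish 8080:80 \
  nginx:alpine

# Verify the service and the placement of its replica tasks
docker service ls
docker service ps web
```

The swarm scheduler then distributes the three replica tasks across the available nodes and restarts them automatically if a node fails.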

Creating a service

A service can be...