Learn Docker - Fundamentals of Docker 18.x

By : Dr. Gabriel N. Schenker

Overview of this book

Docker containers have revolutionized the software supply chain in both small and large enterprises. Never before has a new technology so rapidly penetrated the top 500 enterprises worldwide. Companies that embrace containers and containerize their traditional mission-critical applications have reported savings of at least 50% in total maintenance cost and a reduction of 90% (or more) in the time required to deploy new versions of those applications. Furthermore, they benefit from increased security simply by running applications in containers rather than outside them. This book starts from scratch, introducing you to Docker fundamentals and setting up an environment to work with it. Then we delve into concepts such as Docker containers, Docker images, and Docker Compose. We will also cover the concepts of deployment, orchestration, networking, and security. Furthermore, we explain Docker functionalities on public clouds, such as AWS. By the end of this book, you will have hands-on experience working with Docker containers and orchestrators such as SwarmKit and Kubernetes.

Chapter 12


  1. The Kubernetes master is responsible for managing the cluster. All requests to create objects, the scheduling of pods, the managing of ReplicaSets, and more happen on the master. In a production or production-like cluster, the master does not run application workloads.
  2. On each worker node, we have the kubelet, the proxy, and a container runtime.
  3. The answer is yes: you cannot run standalone containers on a Kubernetes cluster. Pods are the atomic unit of deployment in such a cluster.
  4. All containers running inside a pod share the same Linux network namespace. Thus, all processes running inside those containers can communicate with each other through localhost, much as processes or applications running directly on a host can communicate with each other through localhost (see the pod manifest sketch after this list).
  5. The pause container's sole role is to reserve the namespaces of the pod for the containers that run in the pod.
  6. This is a bad idea, since all containers of a pod are co-located, which means they run on the same cluster node. But the different components of the application (that is, web, inventory, and db) usually have very different requirements with regard to scalability or resource consumption. The web component might need to be scaled up and down depending on the traffic, while the db component has special requirements on storage that the others don't have. If we run every component in its own pod, we are much more flexible in this regard.
  7. We need a mechanism to run multiple instances of a pod in a cluster and make sure that the actual number of pods running always corresponds to the desired number, even when individual pods crash or disappear due to network partitions or cluster node failures. The ReplicaSet is the mechanism that provides this scalability and self-healing to an application service.
  8. We need Deployment objects whenever we want to update an application service in a Kubernetes cluster without causing downtime for the service. Deployment objects add rolling update and rollback capabilities to ReplicaSets (see the Deployment sketch after this list).
  9. Kubernetes service objects are used to make application services participate in service discovery. They provide a stable endpoint for a set of pods (normally governed by a ReplicaSet or a Deployment). Kube services are abstractions that define a logical set of pods and a policy for how to access them (see the Service sketch after this list). There are four types of Kube services:
    • ClusterIP: Exposes the service on an IP address only accessible from inside the cluster; this is a virtual IP (VIP)
    • NodePort: Publishes a port in the range 30,000–32,767 on every cluster node
    • LoadBalancer: Exposes the application service externally using a cloud provider's load balancer, such as ELB on AWS
    • ExternalName: Used when you need to define a proxy for a cluster-external service, such as a database
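
The following is a minimal pod manifest sketching answers 3 to 5: two containers are declared in a single pod, and because they share the pod's network namespace, the first container can reach the second at localhost:6379. The names, images, and ports here are illustrative assumptions, not samples taken from this book.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache          # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:alpine         # serves HTTP on port 80
    ports:
    - containerPort: 80
  - name: cache
    image: redis:alpine         # reachable from the web container at localhost:6379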
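
As a sketch for answers 7 and 8, a Deployment such as the following creates and manages a ReplicaSet that keeps the desired number of pod replicas running and adds rolling update and rollback capabilities; again, the names and images are assumptions made for illustration only.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate         # replace pods gradually, without downtime
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # changing this tag triggers a rolling update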
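
Finally, a Service sketch for answer 9: this hypothetical NodePort service gives the pods labeled app: web a stable virtual IP inside the cluster and additionally publishes port 30080 (within the 30,000–32,767 range) on every cluster node. All three manifests could be applied with kubectl apply -f <file>.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web                    # routes traffic to pods carrying this label
  ports:
  - port: 80                    # port on the service's cluster-internal VIP
    targetPort: 80              # container port on the selected pods
    nodePort: 30080             # published on every cluster node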