The Docker Workshop

By: Vincent Sesto, Onur Yılmaz, Sathsara Sarathchandra, Aric Renzo, Engy Fouda

Overview of this book

Docker containers are the future of highly scalable software systems, offering cost- and runtime-efficient supporting infrastructure. However, learning Docker can seem complex, as it comes with many technicalities. This is where The Docker Workshop will help you. Through this workshop, you'll quickly learn how to work with containers and Docker with the help of practical activities. The workshop starts with Docker containers, enabling you to understand how they work. You'll run third-party Docker images and also create your own images using Dockerfiles and multi-stage Dockerfiles. Next, you'll create environments for Docker images and expedite your deployment and testing process with continuous integration. Moving ahead, you'll tap into interesting topics and learn how to implement production-ready environments using Docker Swarm. You'll also apply best practices to secure Docker images and to ensure that production environments run at maximum capacity. Towards the end, you'll gather the skills needed to successfully move Docker from development to testing, and then into production. While doing so, you'll learn how to troubleshoot issues, clear up resource bottlenecks, and optimize the performance of services. By the end of this workshop, you'll be able to utilize Docker containers in real-world use cases.

Managing Container Memory Resources

Just as we can monitor and control the CPU resources a container uses on our system, we can also do the same with memory. As with CPU, a running container is able to use all of the host's memory under Docker's default settings, which in some cases can cause the system to become unstable. If the host system's kernel detects that there is not enough memory available, it will raise an out-of-memory (OOM) exception and start killing off processes on the system to free up memory.

The good news is that the Docker daemon has a high priority on your system, so the kernel will kill off running containers before it stops the Docker daemon itself. This means that your system should be able to recover if the high memory usage is being caused by a container application.
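As a sketch of how such a limit might be set, the `docker run` flags `--memory`, `--memory-swap`, and `--oom-score-adj` control how much memory a container may use and how likely the kernel is to target it under memory pressure. The image names and values below are illustrative assumptions, not part of the text:

```shell
# Cap the container at 256 MB of RAM; setting --memory-swap equal to
# --memory disables swap for the container, so it is killed rather than
# swapping when it exceeds the limit.
docker run -d --name web \
  --memory 256m \
  --memory-swap 256m \
  nginx

# Make this container a more likely OOM-killer target than others:
# a higher --oom-score-adj (range -1000 to 1000) means it is killed
# sooner when the host runs low on memory.
docker run -d --name batch \
  --oom-score-adj 500 \
  busybox sleep 3600

# Verify the memory limit that was applied (reported in bytes).
docker inspect --format '{{.HostConfig.Memory}}' web
```

Running `docker stats` afterwards shows memory usage against the configured limit rather than against total host memory.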

Note

If your running containers are being shut down, you will also need to make sure you have tested your application...