Deployment with Docker

By: Srdjan Grubor

Overview of this book

Deploying Docker into production is considered to be one of the major pain points in developing large-scale infrastructures, and the documentation available online leaves a lot to be desired. With this book, you will learn everything you need to know to effectively scale your deployments globally and build a resilient, scalable, and containerized cloud platform for your own use. The book starts by introducing you to the containerization ecosystem with some concrete and easy-to-digest examples; after that, you will delve into examples of launching multiple instances of the same container. From there, you will cover orchestration, multi-node setups, volumes, and almost every relevant component of this new approach to deploying services. Using intertwined approaches, the book covers battle-tested tooling and issues likely to be encountered in real-world scenarios, in detail. You will also learn about the other supporting components required for a true PaaS deployment and discover common options to tie the whole infrastructure together. By the end of the book, you will have built a small but functional PaaS (to appreciate the power of the containerized service approach) and be ready to explore real-world approaches to implementing even larger global-scale services.

Limiting service resources

So far, we have not really spent any time talking about service isolation with regard to the resources available to the services, but it is a very important topic to cover. Without resource limits, a malicious or misbehaving service can bring down the whole cluster, depending on the severity of the issue, so great care needs to be taken to specify exactly what allowance individual service tasks should have.

The generally accepted strategy for handling cluster resources is the following:

  • Any resource that may cause errors or failures in other services when used beyond its intended values should be limited at the service level. This is usually the RAM allocation, but may also include CPU or other resources.
  • Any resources, specifically the hardware ones, for which you have an external limit should also be limited for Docker containers (for example, if you are only allowed to use a specific portion of a 1-Gbps NAS connection).
  • Anything that needs to run on a specific device...
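To make the first two points concrete, here is a minimal sketch of how such limits can be expressed in a Compose v3 file for a Swarm service (the service name, image, and values are illustrative, not from the book). The same caps can be set on the command line with the `--limit-cpu` and `--limit-memory` flags of `docker service create`.

```yaml
# docker-compose.yml (Compose file v3 syntax) -- hypothetical "web" service
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: "0.50"    # each task may use at most half a CPU core
          memory: 256M    # tasks exceeding this RAM limit are killed
        reservations:
          memory: 128M    # scheduler only places tasks on nodes with this much free RAM
```

Note the distinction between `limits` (hard caps enforced at runtime) and `reservations` (scheduling guarantees): the first protects neighboring services from a misbehaving task, while the second protects the task itself from being placed on an overcommitted node.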