Hands-On Docker for Microservices with Python

By: Jaime Buelta
Overview of this book

Microservices architecture helps create complex systems with multiple, interconnected services that can be maintained by independent teams working in parallel. This book guides you on how to develop these complex systems with the help of containers. You’ll start by learning to design an efficient strategy for migrating a legacy monolithic system to microservices. You’ll build a RESTful microservice with Python and learn how to encapsulate the code for the services into a container using Docker. While developing the services, you’ll understand how to use tools such as GitHub and Travis CI to ensure continuous delivery (CD) and continuous integration (CI). As the systems become complex and grow in size, you’ll be introduced to Kubernetes and explore how to orchestrate a system of containers while managing multiple services. Next, you’ll configure Kubernetes clusters for production-ready environments and secure them for reliable deployments. In the concluding chapters, you’ll learn how to detect and debug critical problems with the help of logs and metrics. Finally, you’ll discover a variety of strategies for working with multiple teams dealing with different microservices for effective collaboration. By the end of this book, you’ll be able to build production-grade microservices as well as orchestrate a complex system of services using containers.
Table of Contents (19 chapters)

  • Section 1: Introduction to Microservices
  • Section 2: Designing and Operating a Single Service – Creating a Docker Container
  • Section 3: Working with Multiple Services – Operating the System through Kubernetes
  • Section 4: Production-Ready System – Making It Work in Real-Life Environments

Monitoring Logs and Metrics

In real-life operations, the ability to quickly detect and debug a problem is critical. In this chapter, we will discuss the two most important tools for discovering what's happening in a production cluster that is processing a high number of requests. The first is logs, which help us understand what happens within a single request; the second is metrics, which aggregate and classify the overall performance of the system.
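To make the distinction concrete, the following is a minimal sketch in Python, assuming only the standard logging module and the prometheus_client library; the service name, endpoint, and port are illustrative, not the book's exact setup, which is built up later in the chapter.

import logging
from prometheus_client import Counter, start_http_server

# A log line describes what happened inside one request; a metric is one more
# data point in the aggregated view of the whole system.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("myservice")  # hypothetical service name

# Aggregated metric: total requests handled, labelled by endpoint and status.
REQUESTS = Counter(
    "myservice_requests_total",
    "Total requests handled",
    ["endpoint", "status"],
)

def handle_request(endpoint, request_id):
    # Log: the details of this single request.
    logger.info("request %s served on %s", request_id, endpoint)
    # Metric: increment the aggregate counter for this endpoint and status.
    REQUESTS.labels(endpoint=endpoint, status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a Prometheus-style scrape
    handle_request("/status", "req-1234")
    # In a real service the process keeps running so the metrics endpoint
    # can be scraped periodically.

Neither tool is sufficient on its own: logs alone make trends hard to see, while metrics alone cannot tell you what went wrong in a specific request.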

The following topics will be covered in this chapter:

  • Observability of a live system
  • Setting up logs
  • Detecting problems through logs
  • Setting up metrics
  • Being proactive

By the end of this chapter, you'll know how to add logs so that they are available for detecting problems, how to add and plot metrics, and how the two differ.