Python Microservices Development
Overview of this book

We often deploy our web applications into the cloud, and our code needs to interact with many third-party services. An efficient way to build applications that do this is through a microservices architecture. But, in practice, it's hard to get this right due to the complexity of all the pieces interacting with each other. This book will teach you how to overcome these issues and craft applications that are built as small standard units, using all the proven best practices and avoiding the usual traps. It's a practical book: you'll build everything using Python 3 and its amazing tooling ecosystem. You will understand the principles of TDD and apply them. You will use Flask, Tox, and other tools to build your services using best practices. You will learn how to secure connections between services, and how to script Nginx using Lua to build web application firewall features such as rate limiting. You will also familiarize yourself with Docker's role in microservices, and use Docker containers, CoreOS, and Amazon Web Services to deploy your services. This book will take you on a journey that ends with the creation of a complete Python application based on microservices. By the end of the book, you will be well versed in the fundamentals of building, designing, testing, and deploying your Python microservices.
Introduction

Performance metrics


When a microservice eats 100% of the server's memory, bad things will happen. The Linux kernel will typically kill the greedy process using its infamous out-of-memory (OOM) killer.

Using too much RAM can happen for several reasons:

  • The microservice has a memory leak and steadily grows, sometimes at a very fast pace. A common bug in Python C extensions is forgetting to decrement an object's reference count, which leaks that object on every call.
  • The code uses memory without care. For example, a dictionary used as an ad hoc memory cache can grow indefinitely over the days unless an upper limit is built in by design (see the sketch after this list).
  • There's simply not enough memory allocated to the service: the server is handling too many requests or is underpowered for the job.
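
The second cause, an unbounded ad hoc cache, is easy to avoid by giving the cache an upper limit. The following is a minimal sketch contrasting an unbounded dictionary cache with one built on functools.lru_cache; expensive_query is a hypothetical stand-in for whatever slow call is being cached:

    from functools import lru_cache

    def expensive_query(key):
        # Hypothetical stand-in for a slow lookup (database call, HTTP request, and so on)
        return key.upper()

    # Unbounded: every new key stays in memory for the life of the process.
    _cache = {}

    def lookup_unbounded(key):
        if key not in _cache:
            _cache[key] = expensive_query(key)
        return _cache[key]

    # Bounded: once 10,000 entries are cached, the least recently used
    # ones are evicted, so memory use has an upper limit by design.
    @lru_cache(maxsize=10_000)
    def lookup_bounded(key):
        return expensive_query(key)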

It's important to be able to track memory usage over time so that these issues are found before they impact users.
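
One simple way to do this is to sample the process's resident memory (and CPU) periodically. Here is a minimal sketch using the third-party psutil package; in a real deployment you would ship these numbers to a metrics system rather than printing them, and the function name and sampling interval are illustrative assumptions:

    import time
    import psutil

    def log_usage(interval=60):
        process = psutil.Process()  # the current process
        while True:
            rss = process.memory_info().rss           # resident memory, in bytes
            cpu = process.cpu_percent(interval=None)  # CPU % since the previous call
            print("RSS: %.1f MiB, CPU: %.1f%%" % (rss / (1024 * 1024), cpu))
            time.sleep(interval)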

Reaching 100% of the CPU in production is also problematic. While it's desirable to maximize the CPU usage, if the server is too busy when new requests...