gRPC Go for Professionals

By: Clément Jean

Overview of this book

In recent years, the popularity of microservice architecture has surged, bringing forth a new set of requirements. Among these, efficient communication between the different services takes center stage, and that’s where gRPC shines. This book will take you through creating gRPC servers and clients in an efficient, secure, and scalable way. However, communication is just one aspect of microservices, so this book goes beyond that to show you how to deploy your application on Kubernetes and configure other tools that are needed for making your application more resilient. With these tools at your disposal, you’ll be ready to get started with using gRPC in a microservice architecture. In gRPC Go for Professionals, you’ll explore core concepts such as message transmission and the role of Protobuf in serialization and deserialization. Through a step-by-step implementation of a TODO list API, you’ll see the different features of gRPC in action. You’ll then learn different approaches for testing your services and debugging your API endpoints. Finally, you’ll get to grips with deploying the application services via Docker images and Kubernetes.
Epilogue

Distributing requests with load balancing

Load balancing is, in general, a complex topic, and there are many ways to implement it. By default, gRPC provides client-side load balancing. This is a less popular choice than look-aside or proxy load balancing because the client has to “know” every server’s address and carry more complex logic, but because it talks to the servers directly, it enables lower-latency communication. If you want to know more about choosing the right load-balancing approach for your use case, see this documentation: https://grpc.io/blog/grpc-load-balancing/.
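To make this more concrete, here is a minimal sketch of how a Go client can opt into gRPC’s built-in round_robin policy instead of the default pick_first, which would send every RPC to a single server. The options are illustrative: the insecure credentials are a placeholder for whatever transport security your client actually uses.

```go
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// loadBalancingDialOptions returns dial options that enable gRPC's built-in
// client-side round_robin policy via the default service config. Without it,
// the default pick_first policy sends all RPCs to the first resolved address.
func loadBalancingDialOptions() []grpc.DialOption {
	return []grpc.DialOption{
		// Placeholder credentials; swap in your real TLS setup.
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	}
}
```

Passing these options to the client’s dial call is enough to have gRPC pick a different resolved server address for each RPC, provided the resolver returns more than one address.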

To see the power of client-side load balancing, we will deploy three instances of our server to Kubernetes and let the client balance the load across them. I created the Docker images beforehand so that we do not have to go through all of that here. If you are interested in checking the Dockerfiles, you can find them in both the server and client folders...
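As a rough sketch of what this looks like from the client’s side once the three replicas are running: assuming the pods sit behind a headless Service (hypothetically named todo-server, exposing port 50051) so that DNS returns one address per pod, the client can dial the Service name through the dns resolver and let round_robin spread its RPCs across all three instances. The standard health-check call below is only a stand-in for the book’s actual TodoService RPCs.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// "todo-server" and port 50051 are placeholders. A headless Service
	// makes DNS return one record per pod, so the dns resolver hands all
	// three addresses to the round_robin balancer.
	conn, err := grpc.Dial(
		"dns:///todo-server:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()

	// Issue a handful of RPCs; with round_robin each one is sent to the
	// next pod in turn instead of always hitting the first one. This uses
	// the standard gRPC health service purely as an example call, assuming
	// the server registers it.
	client := healthpb.NewHealthClient(conn)
	for i := 0; i < 6; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		if _, err := client.Check(ctx, &healthpb.HealthCheckRequest{}); err != nil {
			log.Printf("RPC %d failed: %v", i, err)
		}
		cancel()
	}
}
```

In this sketch, checking the server logs of the three pods after running the loop should show the requests distributed roughly evenly among them.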