Mastering Distributed Tracing

By Yuri Shkuro
Overview of this book

Mastering Distributed Tracing will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool. The rise of Internet-scale companies, like Google and Amazon, ushered in a new era of distributed systems operating on thousands of nodes across multiple data centers. Microservices increased that complexity, often exponentially. It is harder to debug these systems, track down failures, detect bottlenecks, or even simply understand what is going on. Distributed tracing focuses on solving these problems for complex distributed systems. Today, tracing standards have matured and systems are much faster, making instrumentation less intrusive and the data more valuable. In Mastering Distributed Tracing, Yuri Shkuro, the creator of Jaeger, a popular open-source distributed tracing system, delivers end-to-end coverage of the field. The book reviews the history and theoretical foundations of tracing; shows how to solve the data gathering problem through code instrumentation, with open standards like OpenTracing, W3C Trace Context, and OpenCensus; and discusses the benefits and applications of a distributed tracing infrastructure for understanding and profiling complex systems.

Identifying sources of latency


So far, we have not discussed the performance characteristics of the HotROD application. If we refer to Figure 2.6, we can easily draw the following conclusions:

  1. The call to the customer service is on the critical path because no other work can be done until we get back the customer data that includes the location to which we need to dispatch the car.

  2. The driver service retrieves the N nearest drivers for the customer's location and then queries Redis for each driver's data in sequence, which shows up as the staircase pattern of Redis GetDriver spans. If these lookups were done in parallel, the overall latency could be reduced by almost 200 ms (see the fan-out sketch after this list).

  3. The calls to the route service are not sequential, but not fully parallel either. We can see that at most three requests are in progress at any given time, and as soon as one of them finishes, another request starts. This behavior is typical when we use a fixed-size executor pool (see the worker-pool sketch after Figure 2.11).
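
To illustrate the change suggested in point 2, here is a minimal Go sketch of fanning out the per-driver lookups with a sync.WaitGroup. It is not the actual HotROD code: the Driver type and the getDriver function are hypothetical stand-ins for the Redis GetDriver call and the record it returns.

package main

import (
	"fmt"
	"sync"
)

// Driver is a simplified stand-in for the driver record stored in Redis.
type Driver struct {
	ID       string
	Location string
}

// getDriver is a hypothetical wrapper around the Redis GetDriver lookup.
func getDriver(id string) (Driver, error) {
	// A real implementation would query Redis here.
	return Driver{ID: id, Location: "unknown"}, nil
}

// getDriversParallel issues one lookup per driver ID concurrently, instead of
// the sequential loop that produces the staircase of GetDriver spans.
func getDriversParallel(ids []string) []Driver {
	var wg sync.WaitGroup
	results := make([]Driver, len(ids))
	for i, id := range ids {
		wg.Add(1)
		go func(i int, id string) {
			defer wg.Done()
			if d, err := getDriver(id); err == nil {
				results[i] = d
			}
		}(i, id)
	}
	wg.Wait()
	return results
}

func main() {
	drivers := getDriversParallel([]string{"driver-1", "driver-2", "driver-3"})
	fmt.Println("fetched", len(drivers), "drivers in parallel")
}

With the fan-out in place, the total time for this step approaches the duration of the slowest single lookup rather than the sum of all of them.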

Figure 2.11: Recognizing the sources of latency. The...
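
The bounded parallelism described in point 3 usually comes from a fixed-size executor pool. The following is a minimal, hypothetical Go sketch of that pattern, not the actual route service client: three workers pull jobs from a channel, so a fourth request can only start once one of the first three completes, which matches the "one ends, another starts" shape visible in the trace.

package main

import (
	"fmt"
	"sync"
	"time"
)

// findRoute is a hypothetical stand-in for a call to the route service.
func findRoute(pickup, dropoff string) string {
	time.Sleep(50 * time.Millisecond) // simulate network latency
	return fmt.Sprintf("route from %s to %s", pickup, dropoff)
}

func main() {
	pickups := []string{"A", "B", "C", "D", "E", "F"}

	const poolSize = 3 // fixed-size pool: at most three requests in flight
	jobs := make(chan string)
	var wg sync.WaitGroup

	// Start exactly poolSize workers; each picks up the next job as soon as
	// it finishes the previous one, producing the pattern seen in the trace.
	for w := 0; w < poolSize; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range jobs {
				fmt.Println(findRoute(p, "customer"))
			}
		}()
	}

	for _, p := range pickups {
		jobs <- p
	}
	close(jobs)
	wg.Wait()
}

Increasing the pool size, or removing the bound entirely, would let more route requests run concurrently, at the cost of putting more load on the downstream service.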