Mastering Distributed Tracing

By: Yuri Shkuro

Overview of this book

Mastering Distributed Tracing will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool. The rise of Internet-scale companies such as Google and Amazon ushered in a new era of distributed systems operating on thousands of nodes across multiple data centers. Microservices increased that complexity, often exponentially. These systems are harder to debug: tracking down failures, detecting bottlenecks, or even simply understanding what is going on becomes difficult. Distributed tracing focuses on solving these problems for complex distributed systems. Today, tracing standards have matured and systems are much faster, making instrumentation less intrusive and its data more valuable. Yuri Shkuro, the creator of Jaeger, a popular open source distributed tracing system, delivers end-to-end coverage of the field in Mastering Distributed Tracing. The book reviews the history and theoretical foundations of tracing; solves the data gathering problem through code instrumentation, with open standards such as OpenTracing, W3C Trace Context, and OpenCensus; and discusses the benefits and applications of a distributed tracing infrastructure for understanding and profiling complex systems.

Trace models


In Figure 3.4, we saw a component called "Collection/Normalization." The purpose of this component is to receive tracing data from the trace points in the applications and convert it to some normalized trace model before saving it in the trace storage. Aside from the usual architectural advantages of having a façade on top of the trace storage, normalization is especially important when we are faced with a diversity of instrumentations. Many production environments run numerous versions of instrumentation libraries simultaneously, from very recent ones to some that are several years old, and those versions often capture trace data in very different formats and models, both physical and conceptual. The normalization layer acts as an equalizer: it translates all those varieties into a single logical trace model, which can later be uniformly processed by the trace visualization and analysis tools. In this section, we will focus on two of the...
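To make the idea concrete, here is a minimal sketch of such a normalization layer, written in Go (the language Jaeger itself is implemented in). The NormalizedSpan type and the two incoming formats, legacySpanV1 and modernSpanV2, are hypothetical stand-ins invented for this example, not Jaeger's actual wire models; the point is only that each translator maps a library-specific representation onto one logical model before the data reaches storage.

```go
// A sketch of a collection/normalization layer: spans arrive in two
// hypothetical wire formats and are translated into one normalized
// model. All type and field names here are illustrative assumptions.
package main

import (
	"fmt"
	"time"
)

// NormalizedSpan is the single logical trace model that downstream
// visualization and analysis tools consume.
type NormalizedSpan struct {
	TraceID   string
	SpanID    string
	ParentID  string
	Operation string
	Start     time.Time
	Duration  time.Duration
	Tags      map[string]string
}

// legacySpanV1 mimics an older instrumentation library that reports
// epoch-microsecond timestamps and calls its metadata "annotations".
type legacySpanV1 struct {
	TraceID, SpanID, ParentID   string
	Name                        string
	StartMicros, DurationMicros int64
	Annotations                 map[string]string
}

// modernSpanV2 mimics a newer library that reports absolute start and
// end times and calls its metadata "attributes".
type modernSpanV2 struct {
	TraceID, SpanID, ParentID string
	Operation                 string
	Start, End                time.Time
	Attributes                map[string]string
}

// normalizeV1 translates the legacy format into the normalized model.
func normalizeV1(s legacySpanV1) NormalizedSpan {
	return NormalizedSpan{
		TraceID:   s.TraceID,
		SpanID:    s.SpanID,
		ParentID:  s.ParentID,
		Operation: s.Name,
		Start:     time.UnixMicro(s.StartMicros),
		Duration:  time.Duration(s.DurationMicros) * time.Microsecond,
		Tags:      s.Annotations,
	}
}

// normalizeV2 translates the modern format into the same model.
func normalizeV2(s modernSpanV2) NormalizedSpan {
	return NormalizedSpan{
		TraceID:   s.TraceID,
		SpanID:    s.SpanID,
		ParentID:  s.ParentID,
		Operation: s.Operation,
		Start:     s.Start,
		Duration:  s.End.Sub(s.Start),
		Tags:      s.Attributes,
	}
}

func main() {
	old := legacySpanV1{
		TraceID: "t1", SpanID: "s1", Name: "GET /users",
		StartMicros: 1_700_000_000_000_000, DurationMicros: 42_000,
		Annotations: map[string]string{"http.status": "200"},
	}
	fmt.Printf("%+v\n", normalizeV1(old))
}
```

In a real collector, the choice of translator would be driven by the reporting endpoint or payload version, so the query and analysis tools behind the storage façade only ever see the one normalized model.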