Mastering Distributed Tracing

By Yuri Shkuro

Overview of this book

Mastering Distributed Tracing will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool. The rise of Internet-scale companies, like Google and Amazon, ushered in a new era of distributed systems operating on thousands of nodes across multiple data centers. Microservices increased that complexity, often exponentially. It is harder to debug these systems, track down failures, detect bottlenecks, or even simply understand what is going on. Distributed tracing focuses on solving these problems for complex distributed systems. Today, tracing standards have developed and we have much faster systems, making instrumentation less intrusive and data more valuable. Yuri Shkuro, the creator of Jaeger, a popular open-source distributed tracing system, delivers end-to-end coverage of the field in Mastering Distributed Tracing. Review the history and theoretical foundations of tracing; solve the data gathering problem through code instrumentation, with open standards like OpenTracing, W3C Trace Context, and OpenCensus; and discuss the benefits and applications of a distributed tracing infrastructure for understanding, and profiling, complex systems.

Feature extraction exercise


In this code example, we will build an Apache Flink job called SpanCountJob that performs basic feature extraction from traces. Apache Flink is a distributed, real-time stream processing framework that is well suited to processing traces as they are collected by the tracing backend. Other streaming frameworks, such as Apache Spark or Apache Storm, can be used in a similar way. All of these frameworks integrate well with message queue infrastructure; we will be using Apache Kafka for that purpose. A sketch of the job's core logic follows below.
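To make the shape of such a job concrete, here is a minimal sketch of what the core of SpanCountJob could look like with Flink's DataStream API: it keys spans by trace ID, treats a period of inactivity as the end of a trace using a session window, and emits the span count per trace as a simple feature. The Span model, the 30-second gap, and the buildSpanSource placeholder are illustrative assumptions rather than the book's exact implementation; the Kafka source wiring is shown separately after the deployment discussion.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SpanCountJob {

    // Illustrative span model with only the fields this sketch needs;
    // the real job would deserialize Jaeger's full span format.
    public static class Span {
        public String traceId;
        public String serviceName;
        public Span() {}
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The span stream comes from Kafka in the full exercise; the source
        // wiring is shown separately below and left abstract here.
        DataStream<Span> spans = buildSpanSource(env);

        // Group spans by trace ID, treat 30 seconds of inactivity as the end of
        // a trace, and emit (traceId, spanCount) as a basic per-trace feature.
        DataStream<Tuple2<String, Integer>> spanCounts = spans
                .map(span -> Tuple2.of(span.traceId, 1))
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                .keyBy(pair -> pair.f0)
                .window(ProcessingTimeSessionWindows.withGap(Time.seconds(30)))
                .sum(1);

        spanCounts.print();
        env.execute("SpanCountJob");
    }

    // Placeholder for the Kafka-backed span source (hypothetical helper).
    private static DataStream<Span> buildSpanSource(StreamExecutionEnvironment env) {
        throw new UnsupportedOperationException("wire up a Kafka source here");
    }
}

A session window is used because a trace has no fixed end time: once no new spans for a given trace ID arrive within the gap, the window closes and the count is emitted.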

Since version 1.8, the Jaeger backend supports Kafka as an intermediate transport for spans received by the collectors. The jaeger-ingester component reads the spans from the Kafka stream and writes them to the storage backend, in our case Elasticsearch. Figure 12.2 shows the overall architecture of the exercise. By using this deployment mode of Jaeger, traces are fed into Elasticsearch, where they can be viewed individually in the Jaeger UI, while the same span stream is also processed by Apache Flink...
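To attach the Flink job to this pipeline, it can consume the same Kafka topic that the Jaeger collectors write to. The following sketch assumes Flink's older FlinkKafkaConsumer connector, a single local broker at localhost:9092, the consumer group name span-count-job, and Jaeger's default topic name jaeger-spans; adjust these to match your deployment. Deserializing Jaeger's span encoding into the Span model used above is elided here.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class KafkaSpanSource {

    // Builds a stream of raw span records from the topic that the Jaeger
    // collectors publish to; deserialization into Span objects happens later.
    public static DataStream<String> rawSpans(StreamExecutionEnvironment env) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local single-broker Kafka
        props.setProperty("group.id", "span-count-job");          // assumption: any unused consumer group

        // "jaeger-spans" is Jaeger's default topic name for collector output;
        // change it if your collectors are configured with a different topic.
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("jaeger-spans", new SimpleStringSchema(), props);

        return env.addSource(consumer);
    }
}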