Mastering Distributed Tracing

By: Yuri Shkuro
Overview of this book

Mastering Distributed Tracing will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool. The rise of Internet-scale companies, like Google and Amazon, ushered in a new era of distributed systems operating on thousands of nodes across multiple data centers. Microservices increased that complexity, often exponentially. It is harder to debug these systems, track down failures, detect bottlenecks, or even simply understand what is going on. Distributed tracing focuses on solving these problems for complex distributed systems. Today, tracing standards have matured and tracing systems have become much faster, making instrumentation less intrusive and the data more valuable. Yuri Shkuro, the creator of Jaeger, a popular open source distributed tracing system, delivers end-to-end coverage of the field in Mastering Distributed Tracing. Review the history and theoretical foundations of tracing; solve the data gathering problem through code instrumentation, with open standards like OpenTracing, W3C Trace Context, and OpenCensus; and discover the benefits and applications of a distributed tracing infrastructure for understanding and profiling complex systems.
Table of Contents (21 chapters)
Mastering Distributed Tracing
Contributors
Preface
Other Books You May Enjoy
Leave a review - let other readers know what you think
Afterword
Index

Historical analysis


So far, we have only talked about real-time analysis of tracing data. Occasionally, it may be useful to run the same analysis over historical trace data, assuming it is still within your data store's retention period. As an example, if we come up with a new type of aggregation, the streaming job we discussed earlier will only start generating it for new data, so we would have no baseline for comparison.

Fortunately, big data frameworks are very flexible and provide many ways to source data for analysis, including reading it from databases, HDFS, or other types of warm and cold storage. In particular, Flink's documentation says it is fully compatible with the Hadoop MapReduce APIs and can use Hadoop input formats as a data source. So, we can potentially take the same job we implemented here and simply give it a different data source in order to process historical datasets.
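To make this concrete, the following sketch (not the exact job from this chapter) shows how Flink's Hadoop compatibility layer can feed archived trace data into a batch job. It assumes the spans were archived to HDFS as Hadoop SequenceFiles keyed by trace ID, with the serialized span as the value; the path and the key/value layout are hypothetical, and the real aggregation logic from the streaming job would replace the simple per-trace span count used here.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class HistoricalTraceAnalysisJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Wrap a Hadoop input format so that Flink can read the archived span files.
        Job hadoopJob = Job.getInstance();
        FileInputFormat.addInputPath(hadoopJob, new Path("hdfs:///traces/2019-01-01")); // hypothetical path
        HadoopInputFormat<Text, BytesWritable> archivedSpans = new HadoopInputFormat<>(
                new SequenceFileInputFormat<Text, BytesWritable>(),
                Text.class, BytesWritable.class, hadoopJob);

        // Each record is (trace ID, serialized span). Here we only count spans
        // per trace; this is where the streaming job's aggregation would plug in.
        DataSet<Tuple2<String, Integer>> spansPerTrace = env
                .createInput(archivedSpans)
                .map(new MapFunction<Tuple2<Text, BytesWritable>, Tuple2<String, Integer>>() {
                    @Override
                    public Tuple2<String, Integer> map(Tuple2<Text, BytesWritable> record) {
                        return Tuple2.of(record.f0.toString(), 1);
                    }
                })
                .groupBy(0)
                .sum(1);

        spansPerTrace.print();
    }
}

Note that only the data source really changes: createInput() takes the place of the Kafka consumer, while the downstream transformations can stay largely the same. Since the streaming job was written against the DataStream API and this sketch uses the batch DataSet API, some adaptation of the surrounding code would still be needed.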

While these integrations are possible, as of the time of writing, there are not very many open source...