The basic concept of distributed tracing appears to be very straightforward:
- Instrumentation is inserted into chosen points of the program's code (trace points) and produces profiling data when executed.
- The profiling data is collected in a central location, correlated to the specific execution (request), arranged in the causality order, and combined into a trace that can be visualized or further analyzed.
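These two steps can be sketched in a few lines of plain Python. This is a toy illustration, not any real tracing library's API: the `trace_point` helper, the `Span` record, and the in-memory `collected` list standing in for a central collection backend are all hypothetical. The point is the mechanics: each trace point emits a record carrying a shared trace ID and a parent span ID, and the collector later correlates records by trace ID and arranges them in causality order (parents before children).

```python
import time
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    """One unit of profiling data emitted by a trace point (hypothetical schema)."""
    trace_id: str            # shared by all spans of one execution/request
    span_id: str             # unique ID of this span
    parent_id: Optional[str] # causal link to the span that triggered this one
    name: str
    start: float = 0.0
    end: float = 0.0

collected = []  # stand-in for the central collection location

def trace_point(name, context):
    """Instrumentation inserted at a chosen point; context = (trace_id, parent_span_id)."""
    trace_id, parent_id = context
    return Span(trace_id, uuid.uuid4().hex[:8], parent_id, name, start=time.time())

def finish(span):
    """Record the span's end time and ship it to the collector."""
    span.end = time.time()
    collected.append(span)

def assemble_trace(spans, trace_id):
    """Correlate spans by trace_id and order them causally (parents first)."""
    mine = [s for s in spans if s.trace_id == trace_id]
    children, roots = {}, []
    for s in mine:
        if s.parent_id is None:
            roots.append(s)
        else:
            children.setdefault(s.parent_id, []).append(s)
    ordered = []
    def walk(span, depth):
        ordered.append((depth, span.name))
        for child in sorted(children.get(span.span_id, []), key=lambda c: c.start):
            walk(child, depth + 1)
    for root in sorted(roots, key=lambda s: s.start):
        walk(root, 0)
    return ordered

# Simulated execution: a request handler that calls a database.
trace_id = uuid.uuid4().hex[:8]
root = trace_point("handle_request", (trace_id, None))
db = trace_point("query_db", (trace_id, root.span_id))
finish(db)
finish(root)
print(assemble_trace(collected, trace_id))
# → [(0, 'handle_request'), (1, 'query_db')]
```

Real tracers differ mainly in how the `(trace_id, parent_id)` context is propagated between processes and how spans reach the collector, but the correlate-then-order step is the same in spirit.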
Of course, things are rarely as simple as they appear. Existing tracing systems have made a number of different design decisions, which affect how these systems perform, how difficult they are to integrate into existing distributed applications, and even what kinds of problems they can or cannot help solve.
The ability to collect and correlate profiling data for a given execution or request initiator, and to identify causally related activities, is arguably the most distinctive feature of distributed tracing, setting it apart from all other profiling and observability tools...