Hands-On Concurrency with Rust

By: Brian L. Troutwine
Overview of this book

Most programming languages can really complicate things, especially with regard to unsafe memory access. The burden on you, the programmer, lies across two domains: understanding the modern machine and your language's pain-points. This book will teach you how to manage program performance on modern machines and build fast, memory-safe, and concurrent software in Rust. It starts with the fundamentals of Rust and discusses machine architecture concepts. You will be taken through ways to measure and improve the performance of Rust code systematically, and learn how to write collections with confidence. You will learn about the Sync and Send traits as applied to threads, and how to coordinate thread execution with locks, atomic primitives, data parallelism, and more. The book will show you how to efficiently embed Rust in C++ code and explore the functionalities of various crates for multithreaded applications. It explores implementations in depth. You will learn how a mutex works and build several yourself. You will master radically different approaches that exist in the ecosystem for structuring and managing high-scale systems. By the end of the book, you will feel comfortable with designing safe, consistent, parallel, and high-performance applications in Rust.

Hopper—an MPSC specialization


As mentioned at the tail end of the last chapter, you'd need a fairly specialized use-case to consider not using stdlib's MPSC. In the rest of this chapter, we'll discuss such a use-case and the implementation of a library meant to fill it.
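
Before getting to that use-case, it may help to recall what stdlib's MPSC looks like in use. The sketch below is only a minimal reminder, not code from telem or cernan; the worker count and message shape are invented for the example.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // stdlib's MPSC: any number of cloned senders, exactly one receiver.
    let (snd, rcv) = mpsc::channel();

    let handles: Vec<_> = (0..4)
        .map(|id| {
            let snd = snd.clone();
            thread::spawn(move || {
                for i in 0..10 {
                    // send only errors if the receiver has been dropped
                    snd.send((id, i)).expect("receiver gone");
                }
            })
        })
        .collect();
    // Drop the original sender so recv eventually reports disconnection.
    drop(snd);

    // Drain until every sender has hung up.
    while let Ok((id, i)) = rcv.recv() {
        println!("worker {} sent message {}", id, i);
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
```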

The problem

Recall from the last chapter that the role-threads in telem communicated with one another over MPSC channels. Recall also that telem was a quick version of the cernan (https://crates.io/crates/cernan) project, which fills basically the same role but supports many more ingress and egress protocols and has its sharp edges worn down. One of the key design goals of cernan is that if it receives your data, it will deliver it downstream at least once. This implies that, for supporting ingress protocols, cernan must know, along the full length of the configured routing topology, that there is sufficient space to accept a new event, whether it's a piece of telemetry, a raw byte buffer, or a log line. Now, that...
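
The space constraint itself can be illustrated with nothing more than stdlib's bounded channel. The sketch below is not cernan's code; the Event type, the 4-slot capacity, and the deliberately slow consumer are invented for the example. It shows the decision an ingress path faces: sync_channel gives a fixed-capacity queue, and try_send reports Full without blocking, so the sender knows immediately whether there is space downstream to accept a new event.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;
use std::time::Duration;

// Stand-ins for the kinds of events cernan routes; invented for the example.
#[derive(Debug)]
enum Event {
    Telemetry(u64),
    Log(String),
}

fn main() {
    // A bounded channel standing in for one hop of the routing topology.
    // The capacity of 4 is arbitrary, chosen only to make Full easy to hit.
    let (snd, rcv) = sync_channel::<Event>(4);

    // A deliberately slow downstream consumer.
    let consumer = thread::spawn(move || {
        while let Ok(event) = rcv.recv() {
            thread::sleep(Duration::from_millis(10));
            println!("delivered: {:?}", event);
        }
    });

    for i in 0..32 {
        let event = if i % 2 == 0 {
            Event::Telemetry(i)
        } else {
            Event::Log(format!("line {}", i))
        };
        // Ingress must confirm there is space before accepting the event;
        // try_send reports Full without blocking and hands the event back.
        match snd.try_send(event) {
            Ok(()) => {}
            Err(TrySendError::Full(rejected)) => {
                // A real system needs to do better than reject here; that is
                // the gap a specialized MPSC is meant to close.
                println!("no space downstream, rejecting {:?}", rejected);
            }
            Err(TrySendError::Disconnected(_)) => break,
        }
    }

    // Close the channel so the consumer's recv loop terminates.
    drop(snd);
    consumer.join().unwrap();
}
```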