
Hands-On Concurrency with Rust

By: Brian L. Troutwine

Overview of this book

Most programming languages can really complicate things, especially with regard to unsafe memory access. The burden on you, the programmer, lies across two domains: understanding the modern machine and your language's pain points. This book will teach you how to manage program performance on modern machines and build fast, memory-safe, and concurrent software in Rust. It starts with the fundamentals of Rust and discusses machine architecture concepts. You will be taken through ways to measure and improve the performance of Rust code systematically, and to write collections with confidence. You will learn about the Sync and Send traits applied to threads, and coordinate thread execution with locks, atomic primitives, data-parallelism, and more. The book will show you how to efficiently embed Rust in C++ code and explore the functionalities of various crates for multithreaded applications. It explores implementations in depth: you will learn how a mutex works and build several yourself. You will master the radically different approaches that exist in the ecosystem for structuring and managing high-scale systems. By the end of the book, you will feel comfortable designing safe, consistent, parallel, and high-performance applications in Rust.

Blocking until the gang's all here - barrier


A barrier is a synchronization device that blocks threads until a predefined number of threads have waited on the same barrier. When a barrier's waiting threads wake up, one is declared the leader (discoverable by inspecting the BarrierWaitResult), but this confers no scheduling advantage. A barrier becomes useful when you wish to delay threads behind an unsafe initialization of some resource, say a C library whose internals are not thread-safe at startup, or when you need to force participating threads to start a critical section at roughly the same time.

The latter is the broader category, in your author's experience. When programming with atomic variables, you'll run into situations where a barrier will be useful. Also, consider for a second writing multi-threaded code for low-power devices. There are two strategies possible these days for power management: scaling the CPU to meet requirements, adjusting the runtime of your program...
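To make this concrete, here is a minimal sketch (not drawn from the book's own listings) using std::sync::Barrier from the standard library: four threads each do some setup, block on wait(), and only then proceed together, with exactly one of them seeing is_leader() return true on the BarrierWaitResult.

```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn main() {
    const THREADS: usize = 4;

    // All four threads must call wait() before any of them proceeds.
    let barrier = Arc::new(Barrier::new(THREADS));
    let mut handles = Vec::with_capacity(THREADS);

    for id in 0..THREADS {
        let barrier = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            println!("thread {} is doing its setup", id);

            // Block here until all THREADS threads have arrived.
            let result = barrier.wait();

            // Exactly one waiter is designated the leader; this confers
            // no scheduling advantage, but is handy for one-time work.
            if result.is_leader() {
                println!("thread {} was elected leader", id);
            }

            println!("thread {} enters the critical section", id);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
```

The Barrier is shared through an Arc because each spawned thread needs its own handle to the same underlying barrier; the barrier itself does not need a Mutex, since wait() takes &self.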