The Complete Rust Programming Reference Guide

By: Rahul Sharma, Vesa Kaihlavirta, Claus Matzinger


Types and memory


In this section, we'll touch on some low-level details of how types are laid out in memory, details that are important to know if you write systems software and care about performance.

Memory alignment

This is one of those aspects of memory management that you rarely need to care about unless performance is a strict requirement. Because of the data access latency between memory and the processor, the CPU reads data from memory in chunks rather than byte by byte; this reduces the number of memory accesses. The chunk size is called the memory access granularity of the CPU. Typical chunk sizes are one word (32 bits), two words, four words, and so on, depending on the target architecture. Because of this access granularity, data should reside in memory at an address that is a multiple of the word size. If that is not the case, then the CPU has to read and then perform left or right shifts on the data bits and discard...
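
To satisfy these alignment requirements, compilers insert padding bytes between struct fields, so the order in which you declare fields can change a struct's total size. As a minimal sketch, the following program uses `std::mem::size_of` and `std::mem::align_of` to inspect two hypothetical structs with the same fields in different orders (the `#[repr(C)]` attribute forces declaration order, since Rust's default representation is free to reorder fields):

```rust
use std::mem::{align_of, size_of};

// Declaration order forced with #[repr(C)]: u8 (align 1), u32 (align 4), u8.
// The compiler pads 3 bytes after `a` so `b` starts at a 4-byte offset,
// and 3 trailing bytes so the struct's size is a multiple of its alignment.
#[repr(C)]
struct Unordered {
    a: u8,
    b: u32,
    c: u8,
}

// Same fields, largest-alignment first: only 2 trailing padding bytes needed.
#[repr(C)]
struct Ordered {
    b: u32,
    a: u8,
    c: u8,
}

fn main() {
    println!(
        "Unordered: size={}, align={}",
        size_of::<Unordered>(), // 12: 1 + 3 pad + 4 + 1 + 3 pad
        align_of::<Unordered>() // 4: the largest field alignment
    );
    println!(
        "Ordered:   size={}, align={}",
        size_of::<Ordered>(), // 8: 4 + 1 + 1 + 2 pad
        align_of::<Ordered>()
    );
}
```

On a typical target this prints a size of 12 bytes for `Unordered` but only 8 for `Ordered`, even though both carry the same six bytes of payload; simply sorting fields from largest to smallest alignment recovered a third of the space.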