Hands-On Data Structures and Algorithms with Rust

By : Claus Matzinger

Overview of this book

Rust has come a long way and is now used in many contexts. Its key strengths lie in software infrastructure and resource-constrained applications, including desktop applications, servers, and performance-critical code, as well as systems programming. This book will be your guide as it takes you through implementing classic data structures and algorithms in Rust, helping you to get up and running as a confident Rust programmer. The book begins with an introduction to Rust data structures and algorithms, while also covering essential language constructs. You will learn how to store data using linked lists, arrays, stacks, and queues, and how to implement sorting and searching algorithms. You will then learn how to attain high performance by applying algorithms to string data types and using hash structures in algorithm design. The book also examines algorithm analysis, including brute-force algorithms, greedy algorithms, divide-and-conquer algorithms, dynamic programming, and backtracking. By the end of the book, you will have learned how to build components that are easy to understand, debug, and use in different applications.

Chapter 8

Why estimate runtime complexity over something such as the number of statements?

Runtime complexity describes the projected growth of the work done relative to the main input parameter. In a sense, it still counts statements, and counting them directly would likely lead to the same conclusion; the difference is that runtime complexity focuses on the subset of statements that matters most: those whose execution count grows with the input.
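As a minimal sketch of this idea (the function name is my own, not from the book), counting the comparisons a linear search performs shows that the loop body dominates all other statements, and that its count grows with the input length, which is exactly what O(n) captures:

```rust
// Count the comparisons a linear search makes; the loop body is the
// statement subset that matters, since its count grows with the input.
fn count_comparisons(haystack: &[i32], needle: i32) -> usize {
    let mut comparisons = 0;
    for &item in haystack {
        comparisons += 1; // one comparison per element visited
        if item == needle {
            break;
        }
    }
    comparisons
}

fn main() {
    // Searching for an absent value touches every element, so the
    // comparison count equals the input length: linear growth, O(n).
    let small: Vec<i32> = (0..10).collect();
    let large: Vec<i32> = (0..10_000).collect();
    println!("{}", count_comparisons(&small, -1)); // 10
    println!("{}", count_comparisons(&large, -1)); // 10000
}
```

The fixed-cost statements around the loop (the initialization and the return) are ignored because they do not change as the input grows.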

How does runtime complexity relate to math functions?

In two ways: mathematical functions can be described in the same way as functions in programming, since they rest on the same fundamental construct; and math functions, in particular the logarithmic and exponential functions, are used to express runtime complexity itself.
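To illustrate the logarithmic case (this step-counting variant is my own sketch, not code from the book), binary search halves the remaining range on every iteration, so its step count follows the math function log2(n):

```rust
use std::cmp::Ordering;

// Binary search that also reports how many iterations it took.
// Each iteration halves the range, so the step count grows like log2(n).
fn binary_search_steps(sorted: &[i32], needle: i32) -> (bool, u32) {
    let (mut lo, mut hi) = (0usize, sorted.len());
    let mut steps = 0;
    while lo < hi {
        steps += 1;
        let mid = lo + (hi - lo) / 2;
        match sorted[mid].cmp(&needle) {
            Ordering::Equal => return (true, steps),
            Ordering::Less => lo = mid + 1,
            Ordering::Greater => hi = mid,
        }
    }
    (false, steps)
}

fn main() {
    let data: Vec<i32> = (0..1024).collect();
    // An unsuccessful search over 1024 elements takes 11 steps,
    // i.e. roughly log2(1024) + 1 = 11: logarithmic, not linear, growth.
    let (found, steps) = binary_search_steps(&data, -1);
    println!("found={found}, steps={steps}");
}
```

Exponential functions appear the same way on the other end of the scale, for example when an algorithm doubles its work for each additional input element.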

Is the complexity class that is typically provided the best or worst case?

The worst case, since it describes the slowest, least efficient execution and therefore provides an upper bound that holds for every input.
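A quick sketch of why the distinction matters (the cost-counting function is hypothetical, not from the book): the same linear search costs O(1) in the best case but O(n) in the worst case, and only the latter is a guarantee:

```rust
// Return how many elements a linear search examines before finishing.
fn linear_search_cost(data: &[i32], needle: i32) -> usize {
    for (i, &item) in data.iter().enumerate() {
        if item == needle {
            return i + 1; // elements examined so far
        }
    }
    data.len() // not found: every element was examined
}

fn main() {
    let data: Vec<i32> = (0..100).collect();
    println!("{}", linear_search_cost(&data, 0));  // best case: 1 element, O(1)
    println!("{}", linear_search_cost(&data, 99)); // worst case: 100 elements, O(n)
}
```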

Why are loops important in estimating complexity?

Loops are great indicators of complexity: the number of iterations, and how that number grows with the input size, typically dominates the runtime. A single loop over the input suggests linear growth, nested loops multiply their iteration counts, and a loop that halves the remaining work suggests logarithmic growth.
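As a minimal sketch (the function name is my own, for illustration), counting how often the innermost body of two nested loops runs shows the multiplication directly: n iterations times n iterations gives n * n body executions, which is O(n^2):

```rust
// Nested loops multiply their iteration counts: the inner body runs
// n * n times, so doubling n quadruples the work, which is O(n^2).
fn pair_count(n: usize) -> usize {
    let mut body_runs = 0;
    for _ in 0..n {
        for _ in 0..n {
            body_runs += 1;
        }
    }
    body_runs
}

fn main() {
    println!("{}", pair_count(10)); // 100
    println!("{}", pair_count(20)); // 400: doubling n quadruples the count
}
```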