The C++ Programmer's Mindset

By: Sam Morley

Overview of this book

Solve complex problems in C++ by learning how to think like a computer scientist. This book introduces computational thinking, a framework for solving problems using decomposition, abstraction, and pattern recognition, and shows you how to apply it using modern C++ features. You'll learn how to break down challenges, choose the right abstractions, and build solutions that are both maintainable and efficient.

Through small examples and a large case study, this book guides you from foundational concepts to high-performance applications. You'll explore reusable templates, algorithms, modularity, and even parallel computing and GPU acceleration. With each chapter, you'll not only expand your C++ skillset, but also refine the way you approach and solve real-world problems.

Written by a seasoned research engineer and C++ developer, this book combines practical insight with academic rigor. Whether you're designing algorithms or profiling production code, this book helps you deliver elegant, effective solutions with confidence.
Using Thrust algorithms and OpenMP offloading

Thrust is NVIDIA’s C++ library for parallel and device-aware containers and algorithms. It (mostly) replicates the standard library interface but allows for the data to be stored on a device (GPU) and for algorithms to be implemented by kernels. Thrust is distributed as part of the CUDA toolkit, so it is generally very easy to use as part of CUDA projects. (AMD ROCm also provides a similar library.)
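As a minimal sketch of that drop-in style (illustrative code, not taken from the book), the following program copies data to the GPU and runs Thrust algorithms on it; it builds as an ordinary CUDA source file with nvcc:

```cpp
// Minimal sketch: Thrust algorithms with a std::vector-like interface.
// Build as part of a CUDA project, e.g. `nvcc -O2 sketch.cu -o sketch`.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>

#include <iostream>
#include <vector>

int main() {
    std::vector<int> src{5, 3, 1, 4, 2};

    // Constructing from host iterators copies the data into GPU memory.
    thrust::device_vector<int> d(src.begin(), src.end());

    // Because the iterators refer to device memory, these algorithms are
    // executed on the GPU rather than on the CPU.
    thrust::sort(d.begin(), d.end());
    int sum = thrust::reduce(d.begin(), d.end(), 0);

    std::cout << "sum = " << sum << '\n';   // 15
}
```

The calls mirror std::sort and std::reduce; only the container type changes where the work happens.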

The backbone containers provided by Thrust are the various vector classes. A Thrust host_vector operates very much like std::vector, with data stored in linear memory that grows geometrically when it runs out of space. On the other hand, a device_vector is entirely contained in GPU memory, with data allocated by cudaMalloc (or equivalent). A third universal_vector is a vector based on unified memory spaces, whereby the transfer of data from host memory to device memory is handled transparently at runtime based on where the data is...
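A rough sketch of how the three containers differ in where their storage lives, and how data moves between them (again illustrative rather than the book's code; thrust::universal_vector requires a reasonably recent CUDA toolkit):

```cpp
// Sketch of the three Thrust vector flavours and host<->device transfers.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/universal_vector.h>
#include <thrust/copy.h>
#include <thrust/sequence.h>

int main() {
    // host_vector: ordinary host memory, grows geometrically like std::vector.
    thrust::host_vector<float> h(1000);
    thrust::sequence(h.begin(), h.end());   // fill with 0, 1, 2, ...

    // device_vector: storage allocated in GPU memory (cudaMalloc under the
    // hood); assigning from a host_vector performs the host-to-device copy.
    thrust::device_vector<float> d = h;

    // Explicit device-to-host copy back into host storage.
    thrust::copy(d.begin(), d.end(), h.begin());

    // universal_vector: a single allocation in unified (managed) memory,
    // visible to both host and device, with migration handled by the runtime.
    thrust::universal_vector<float> u = h;
}
```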
