
Python Parallel Programming Cookbook - Second Edition

By: Giancarlo Zaccone

Overview of this book

Nowadays, it is essential for programmers to understand the link between their software and the parallel nature of the underlying hardware so that their programs run efficiently on modern computer architectures. Applications based on parallel programming are fast, robust, and easily scalable.

This updated edition features cutting-edge techniques for building effective concurrent applications in Python 3.7. The book introduces parallel programming architectures and covers the fundamental recipes for thread-based and process-based parallelism. You'll learn about mutexes, semaphores, locks, and queues, exploiting the threading and multiprocessing modules, all of which are basic tools for building parallel applications. Recipes on MPI programming will help you synchronize processes using the fundamental message-passing techniques with mpi4py. Furthermore, you'll get to grips with asynchronous programming and how to harness the power of the GPU with the PyCUDA and PyOpenCL frameworks. Finally, you'll explore how to design distributed computing systems with Celery and architect Python apps on the cloud using PythonAnywhere, Docker, and serverless applications.

By the end of this book, you will be confident in building concurrent and high-performing applications in Python.

Understanding heterogeneous computing

Over the years, the search for better performance in increasingly complex calculations has led to the adoption of new computing techniques. One of these is heterogeneous computing, which aims to make different (or heterogeneous) processors cooperate in such a way as to gain advantages, in particular, in terms of computation time.

In this context, the processor on which the main program is run (generally the CPU) is called the host, while the coprocessors (for example, the GPUs) are called devices. The latter are generally physically separated from the host and manage their own memory space, which is also separated from the host's memory.
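As a minimal, CPU-only illustration of this host/device separation, the following sketch mocks the explicit memory copies that a real GPU API (for example, PyCUDA's `memcpy_htod` and `memcpy_dtoh`) requires. The helpers `to_device`, `device_kernel`, and `to_host` are hypothetical names used here for clarity, not part of any library:

```python
import numpy as np

# The "device" memory is modeled as a separate dict: buffers here are
# distinct from the host's arrays, so data must be copied explicitly,
# just as with a physical GPU.
device_memory = {}

def to_device(name, host_array):
    # Host -> device copy: the device gets its own, separate buffer.
    device_memory[name] = host_array.copy()

def device_kernel(name):
    # A trivial "kernel" operating only on the device's copy.
    device_memory[name] *= 2

def to_host(name):
    # Device -> host copy of the result.
    return device_memory[name].copy()

host_data = np.arange(4, dtype=np.float32)   # lives in host memory
to_device("x", host_data)                    # transfer to the device
device_kernel("x")                           # compute on the device
result = to_host("x")                        # transfer back to the host

# host_data is unchanged; only the device's copy was modified.
```

The key point the sketch captures is that the two memory spaces do not share state: until `to_host` is called, the host's `host_data` still holds the original values.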

In particular, following significant market demand, the GPU has evolved into a highly parallel processor, transforming...