Distributed Computing with Python
Overview of this book

CPU-intensive data processing has become crucial given the complexity of today's big data applications. Reducing the CPU load per process is important for improving the overall speed of an application. This book will teach you how to execute computations in parallel by distributing them across multiple processors in a single machine, thus improving the overall performance of a big data processing task. We will cover synchronous and asynchronous models, shared memory and file systems, communication between processes, synchronization, and more.

Summary


We looked at a couple of technologies that we can exploit to make our Python code run faster and, in some cases, use multiple CPUs in our computers. One of these is the use of multiple threads, and the other is the use of multiple processes. Both are supported natively by the Python standard library.
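As a minimal sketch (not taken from the book), the two modules expose a very similar interface for running a function in a thread or in a separate process; the countdown function and iteration count below are arbitrary examples:

import threading
import multiprocessing


def count_down(n):
    # A purely CPU-bound loop, used only to have something to run.
    while n > 0:
        n -= 1


if __name__ == '__main__':
    # The same function, first in a thread: it shares memory with the main
    # thread, but pure-Python code like this is serialized by the GIL.
    t = threading.Thread(target=count_down, args=(5_000_000,))
    t.start()
    t.join()

    # ...and then in a separate process, which can run on another CPU core.
    p = multiprocessing.Process(target=count_down, args=(5_000_000,))
    p.start()
    p.join()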

We looked at three modules: threading, for developing multithreaded applications; multiprocessing, for developing process-based parallelism; and concurrent.futures, which provides a high-level asynchronous interface to both.
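As a rough illustration (again, not code from the book), ThreadPoolExecutor and ProcessPoolExecutor share the same submit/map interface, so switching between thread- and process-based parallelism is largely a one-line change; the worker function and pool sizes here are made up for the example:

import concurrent.futures as cf


def square(x):
    return x * x


if __name__ == '__main__':
    # Thread-based pool: map() returns results in input order.
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(square, range(8))))

    # Process-based pool: submit() returns futures we can collect as they finish.
    with cf.ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(square, n) for n in range(8)]
        print(sorted(f.result() for f in cf.as_completed(futures)))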

As far as parallelism goes, these three modules are not the only ones in Python land. Other packages implement their own parallel strategies internally, freeing programmers from having to do so themselves. Probably the best known of these is NumPy, the de facto standard Python package for array and matrix manipulations. Depending on the BLAS library that it is compiled against, NumPy is able to use multiple threads to speed up complex operations (for example,...
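As an illustration of that kind of operation (a sketch, not from the book), and assuming NumPy is linked against a multithreaded BLAS such as OpenBLAS or MKL, the matrix product below can run on several threads without any explicit parallel code on our side; environment variables such as OMP_NUM_THREADS or OPENBLAS_NUM_THREADS usually control how many threads the BLAS uses.

import numpy as np

# The matrix product is handed off to whatever BLAS library NumPy was
# built against; with OpenBLAS or MKL it typically runs on several threads.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b
print(c.shape)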