Distributed Computing with Python
Overview of this book

CPU-intensive data processing tasks have become crucial given the complexity of today's big data applications. Reducing the CPU load per process is important for improving the overall speed of an application. This book teaches you how to execute computations in parallel by distributing them across the processors of a single machine, thereby improving the overall performance of a big data processing task. We will cover synchronous and asynchronous models, shared memory and file systems, communication between processes, synchronization, and more.

Chapter 3. Parallelism in Python

We mentioned threads, processes, and parallel programming in general in the previous two chapters. We talked, at a very high level and in abstract terms, about how you can organize code so that some portions run in parallel, potentially on multiple CPUs or even multiple machines.

In this chapter, we will look at parallel programming in more detail and see which facilities Python offers us to make our code use more than one CPU or CPU core at a time (but always within the boundaries of a single machine). The main goal here will be speed for CPU-intensive problems, and responsiveness for I/O-intensive code.
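
To make that distinction concrete, here is a minimal sketch of the I/O-bound case, using only the standard library. It is an illustration rather than an excerpt from the book: the fake_download function and the one-second delays are made up, and the "downloads" are simulated with time.sleep. Threads keep several of these slow calls in flight at once, which is exactly the kind of responsiveness we are after:

import time
from concurrent.futures import ThreadPoolExecutor

def fake_download(url):
    # Stand-in for a blocking network call; the thread mostly waits.
    time.sleep(1.0)
    return url

urls = ['http://example.com/%d' % i for i in range(5)]

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_download, urls))

# The five simulated one-second downloads overlap, so the elapsed time
# is close to one second rather than five.
print('%d downloads in %.2fs' % (len(results), time.time() - start))

Because each worker spends almost all of its time waiting, the global interpreter lock is not a bottleneck here; the same approach would not speed up CPU-bound work, which is where processes come in.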

The good news is that we can write parallel programs in Python using nothing but modules from the standard library. This is not to say that external libraries and tools are irrelevant; quite the opposite. It is simply that the standard library is enough for what we will try to do in this chapter.
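
As a rough illustration of that claim (not code from the book), the following sketch parallelizes a toy CPU-bound function with multiprocessing.Pool, which ships with the standard library. The function slow_square_sum, the input sizes, and the assumption of a multi-core machine are all made up for the example:

import time
from multiprocessing import Pool, cpu_count

def slow_square_sum(n):
    # A deliberately wasteful CPU-bound computation.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    numbers = [2 * 10 ** 6] * 8

    # Run the work serially on a single core.
    start = time.time()
    serial = [slow_square_sum(n) for n in numbers]
    print('serial:   %.2fs' % (time.time() - start))

    # Run the same work across one process per CPU core.
    start = time.time()
    with Pool(processes=cpu_count()) as pool:
        parallel = pool.map(slow_square_sum, numbers)
    print('parallel: %.2fs' % (time.time() - start))

    assert serial == parallel

On a machine with several cores, the parallel version should finish in a fraction of the serial time; on a single-core machine, the two timings will be roughly the same.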

In this chapter, we will cover the following...