Distributed Computing with Python


Overview of this book

CPU-intensive data processing tasks have become crucial, given the complexity of today's big data applications. Reducing the CPU load per process is key to improving an application's overall speed. This book teaches you how to execute computations in parallel by distributing them across multiple processors in a single machine, improving the overall performance of big data processing tasks. We will cover synchronous and asynchronous models, shared memory and file systems, communication between processes, synchronization, and more.

A tour of Celery


What is a distributed task queue, and how does Celery implement one? It turns out that the distributed task queue is an architecture that has been around for quite some time. It is a form of master-worker architecture with a middleware layer that uses a set of queues for work requests (that is, the task queues) and a queue, or a storage area, to hold the results (that is, the result backend).

The master process (also called a client or producer) puts work requests (that is, tasks) into one of the task queues and fetches results from the result backend. Worker processes, on the other hand, subscribe to some or all of the task queues to know what work to perform and put their results (if any) into the result backend.
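The flow just described can be sketched with nothing but the standard library. This is a toy, single-process illustration, not Celery itself: threads stand in for worker machines, a `queue.Queue` plays the task queue, and a plain dictionary plays the result backend. All names here are invented for the sketch.

```python
import queue
import threading

task_queue = queue.Queue()   # holds work requests (the "task queue")
result_backend = {}          # holds results (the "result backend")
backend_lock = threading.Lock()

def worker():
    """Consume tasks until a None sentinel arrives."""
    while True:
        item = task_queue.get()
        if item is None:                  # sentinel: shut this worker down
            task_queue.task_done()
            break
        task_id, func, args = item
        value = func(*args)               # perform the requested work
        with backend_lock:
            result_backend[task_id] = value
        task_queue.task_done()

# Start two workers; the master neither knows nor cares how many there are.
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()

# The master (producer) posts work requests, each tagged with a task id...
for i in range(5):
    task_queue.put((i, pow, (i, 2)))

# ...then signals shutdown (one sentinel per worker) and collects results.
for _ in workers:
    task_queue.put(None)
task_queue.join()
for w in workers:
    w.join()

print(sorted(result_backend.items()))
# -> [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

In a real Celery deployment, the queue and the backend live in external middleware (for example, a message broker and a results store), so producers and workers can run on different machines; the shape of the interaction, however, is the same as in this sketch.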

This is a very simple and flexible architecture. Master processes do not need to know how many workers are available, or on which machines they are running; they just need to know where the queues are and how to post a request for work.

The same can be said...