Distributed Computing with Python

Overview of this book

CPU-intensive data processing tasks have become crucial given the complexity of the big data applications in use today. Reducing the CPU utilization per process is important for improving the overall speed of an application. This book will teach you how to perform parallel execution of computations by distributing them across multiple processors, thus improving the overall performance of a big data processing task. We will cover synchronous and asynchronous models, shared memory and file systems, communication between processes, synchronization, and more.

Testing the installation


Let's try a quick example to make sure that our Celery installation is working. We will need four terminal windows, ideally on four different machines (again, we will call them HOST1, HOST2, HOST3, and HOST4). We will start RabbitMQ in one window on HOST1, as shown in the following command (make sure that you use the correct path to rabbitmq-server):

HOST1 $ sudo /usr/local/sbin/rabbitmq-server

In another terminal window (on HOST2), start Redis (if you did not install it, skip to the next paragraph) as follows (make sure to use the correct path to redis-server):

HOST2 $ sudo /usr/local/bin/redis-server

Finally, in a third window (HOST3), create the following Python script (always remember to activate our virtual environment using workon book) and call it test.py:

import celery

# Point the app at the message broker (RabbitMQ running on HOST1)
# and at the result backend (Redis running on HOST2).
app = celery.Celery('test',
                    broker='amqp://HOST1',
                    backend='redis://HOST2')


@app.task
def echo(message):
    return message

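Celery's real task machinery does far more than this, but as a rough intuition for the pattern behind `@app.task` — register a function, enqueue calls with `.delay()`, and fetch results with `.get()` — here is a toy in-process sketch using only the standard library. `ToyApp` and `ToyResult` are invented names for illustration, not Celery APIs:

```python
import queue
import threading

class ToyResult:
    """Stand-in for Celery's AsyncResult: .get() blocks until the worker is done."""
    def __init__(self):
        self._done = threading.Event()
        self._value = None

    def set(self, value):
        self._value = value
        self._done.set()

    def get(self, timeout=None):
        self._done.wait(timeout)
        return self._value

class ToyApp:
    """Stand-in for a Celery app: a task registry plus an in-process queue."""
    def __init__(self):
        self.tasks = {}
        self.queue = queue.Queue()

    def task(self, func):
        """Register func and attach a .delay() that enqueues a call instead of running it."""
        def delay(*args, **kwargs):
            result = ToyResult()
            self.queue.put((func, args, kwargs, result))
            return result
        func.delay = delay
        self.tasks[func.__name__] = func
        return func

    def worker(self):
        """Consume one queued call, run it, and store the return value."""
        func, args, kwargs, result = self.queue.get()
        result.set(func(*args, **kwargs))

app = ToyApp()

@app.task
def echo(message):
    return message

# Producer side: enqueue a call without executing it.
res = echo.delay("Hello, world!")
# Worker side: in real Celery this is a separate worker process.
threading.Thread(target=app.worker).start()
print(res.get(timeout=5))
```

In real Celery the queue lives in the broker (RabbitMQ here), the result is stored in the backend (Redis here), and the worker runs in its own process on any machine that can reach both.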
What this code...