
More complex Celery applications


We will implement two simple but interesting applications using Celery. The first one is a reimplementation of the currency exchange rate example from Chapter 3, Parallelism in Python, and the second one is a distributed sort algorithm.

We are going to use a total of four machines again (HOST1, HOST2, HOST3, and HOST4) for these examples. As before, HOST1 will run RabbitMQ, HOST2 will run Redis, HOST3 will run the Celery workers, and HOST4 will run our main code.
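Once the script we are about to write has been copied to HOST3, the workers there can be started with Celery's standard worker command; the exact module name and options depend on your setup, but it will look something like this:

celery -A currency worker --loglevel=info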

Let's start with a simple example. Create a new Python script (celery/currency.py) and write the following code (if you're not using Redis, remember to change backend to 'amqp://HOST1'):

import celery
import urllib.request


# Create the Celery app: task messages go through the RabbitMQ broker on
# HOST1, and task results are stored in the Redis backend on HOST2.
app = celery.Celery('currency',
                    broker='amqp://HOST1',
                    backend='redis://HOST2')


# Yahoo Finance CSV quote service; {} is filled in with the currency pair.
URL = 'http://finance.yahoo.com/d/quotes.csv?s={}=X&f=p'

@app.task...
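The task itself only needs to download the one-line CSV response for a currency pair and parse it into a number. A minimal sketch of such a task follows; the function name get_rate, its signature, and the return value are illustrative assumptions, not code reproduced verbatim from the chapter:

@app.task
def get_rate(pair, url_tmplt=URL):
    # Illustrative sketch: fetch the single-line CSV quote for a currency
    # pair such as 'EURUSD' and return the pair together with its rate.
    with urllib.request.urlopen(url_tmplt.format(pair)) as response:
        body = response.read().decode('utf-8')
    return pair, float(body.strip())

On HOST4, the main script can then queue one call per currency pair with get_rate.delay(pair) and collect the results through the returned AsyncResult objects.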