Celery alternatives – Python-RQ


A lightweight, simpler alternative to Celery is Python-RQ (http://python-rq.org). It relies on Redis alone to provide both the task queue and the result backend, and it is intended for applications that do not need complex task dependencies or task routing.
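
Since Redis plays both of these roles, the only moving parts are a Redis server and one or more worker processes. The following is a minimal setup sketch rather than a listing from this book; it assumes a Redis server running on localhost:6379 and uses the library's default queue name:

# Minimal Python-RQ setup sketch (assumption: a Redis server on localhost:6379).
# The same Redis instance stores both the queued jobs and their results.
from redis import Redis
from rq import Queue

redis_conn = Redis(host='localhost', port=6379)
queue = Queue('default', connection=redis_conn)  # jobs go in, results come back out

A worker process is then started from the shell with rq worker (older RQ releases install the equivalent rqworker script); it pops jobs off the queue, runs them, and writes their return values back into Redis.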

Since Celery and Python-RQ are conceptually very similar, let's jump right in and rewrite one of our earlier examples. Create a new Python script (rq/currency.py) with the following code:

import urllib.request


# Yahoo Finance CSV quote endpoint; {} is filled in with a currency pair
# such as 'EURUSD'.
URL = 'http://finance.yahoo.com/d/quotes.csv?s={}=X&f=p'


def get_rate(pair, url_tmplt=URL):
    # Uncomment the next line to simulate a failing task:
    # raise Exception('Booo!')

    # Fetch the quote and return a (pair, rate) tuple.
    with urllib.request.urlopen(url_tmplt.format(pair)) as res:
        body = res.read()
    return (pair, float(body.strip()))

This is the same code from all the currency examples we have seen so far; nothing new here. The main difference from the Celery implementation is that this code has no dependency on Python-RQ or Redis. Copy this script to the...
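
To give an idea of how this function is then driven through Python-RQ, the following is a rough sketch of a submitter script rather than the book's listing; the file name rq/submit.py, the currency pairs, and the polling interval are illustrative, and it assumes a local Redis server, a running rq worker started from the rq/ directory, and currency.py importable from that directory:

# Sketch of a Python-RQ submitter script (illustrative, not the book's listing).
# Assumptions: a Redis server on localhost, an "rq worker" process started from
# the rq/ directory, and currency.py (above) importable from that directory.
import time

from redis import Redis
from rq import Queue

from currency import get_rate

queue = Queue(connection=Redis())

# Enqueue one job per currency pair; each enqueue() call returns a Job handle.
pairs = ('EURUSD', 'GBPUSD', 'CHFUSD')
jobs = [queue.enqueue(get_rate, pair) for pair in pairs]

# The worker writes each job's return value back into Redis; poll until every
# result is available, then print it.
# (Error handling is omitted: a failed job's result would simply stay None.)
for job in jobs:
    while job.result is None:
        time.sleep(0.5)
    pair, rate = job.result
    print('{} -> {}'.format(pair, rate))

Note that get_rate itself never imports rq or redis; all of the queueing machinery lives in the submitter script and in the worker process.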