Celery in production

Here are some helpful tips for running a large Celery application in a production environment.

The first suggestion is to use a configuration module for your Celery application rather than configuring the Celery app in your worker code. Assuming that your configuration file is called config.py, you can pass it to a Celery application as follows:

import celery

# Create the Celery application and load its settings
# from the config.py module found on the Python path.
app = celery.Celery('mergesort')
app.config_from_object('config')
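With the configuration externalized, the same worker code runs unchanged in development and production. A minimal way to start a worker against this application, assuming the code above lives in a module named mergesort.py (an assumption made here for illustration), would be:

celery -A mergesort worker --loglevel=info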

Then, put the following directives in config.py, together with any other configuration settings relevant to the specific application being developed:

# Use RabbitMQ on HOST1 as the message broker.
BROKER_URL = 'amqp://HOST1'
# Store task results in Redis on HOST2.
CELERY_RESULT_BACKEND = 'redis://HOST2'
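In the same spirit, a few other commonly used directives could be added alongside these; the values below are illustrative assumptions, not requirements of the example application:

# Serialize task messages and results as JSON (an illustrative choice).
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
# Expire stored results after one hour to keep the result backend small.
CELERY_TASK_RESULT_EXPIRES = 3600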

Probably the main performance-related suggestion is to use more than one queue, so that tasks can be prioritized and/or separated based on their expected runtime. Using multiple queues and routing tasks to the appropriate queue is a simple way to assign more horsepower (that is, workers) to one group of tasks, as the sketch below illustrates.
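As a concrete sketch of this idea, one could route a long-running task to a dedicated queue in config.py; the task name mergesort.sort and the queue name long_running are illustrative assumptions, not part of the book's example:

# In config.py: send the (hypothetical) slow task to its own queue,
# leaving all other tasks on the default 'celery' queue.
CELERY_ROUTES = {
    'mergesort.sort': {'queue': 'long_running'},
}

A dedicated worker (or several) can then be pointed at that queue alone:

celery -A mergesort worker -Q long_running --concurrency=2

Celery...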