Parallel Programming with Python

By: Jan Palach, Jan Palach V Cruz da Silva


Using Celery to make a distributed Web crawler

We will now move on to adapting our Web crawler to Celery. We already have webcrawler_queue, which is responsible for encapsulating Web crawler tasks. On the server side, we will create our crawl_task task inside the module.
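For context, sending crawler tasks to webcrawler_queue is configured on the Celery app itself. The sketch below is an assumption for illustration, not code from this section: the broker URL and the module name tasks are placeholders, and it uses the task_routes setting (older Celery releases spelled this CELERY_ROUTES).

```python
from celery import Celery

# Broker URL and module name 'tasks' are assumptions for illustration only.
app = Celery('tasks', broker='redis://localhost:6379/0')

# Route crawl_task to the dedicated webcrawler_queue so a worker started
# with "-Q webcrawler_queue" will consume it.
app.conf.task_routes = {
    'tasks.crawl_task': {'queue': 'webcrawler_queue'},
}
```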

First, we will add imports for the re and requests modules, which provide regular expressions and an HTTP library, respectively. The code is as follows:

import re
import requests

Then, we will define our regular expression, which we studied in the previous chapters, as follows:

html_link_regex = re.compile(
    '<a\s(?:.*?\s)*?href=[\'"](.*?)[\'"].*?>')
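To make the behavior of an href-matching pattern like this concrete, here is a small self-contained check. The sample HTML string (and the pattern as written here) are illustrative assumptions, not part of the chapter's crawler.

```python
import re

# Illustrative href-matching pattern: captures the URL between the quotes
# of an <a ... href="..."> tag, tolerating other attributes before href.
html_link_regex = re.compile(r'<a\s(?:.*?\s)*?href=[\'"](.*?)[\'"].*?>')

sample = ('<a href="http://example.com/one">one</a> '
          '<a class="nav" href=\'http://example.com/two\'>two</a>')

links = html_link_regex.findall(sample)
print(links)  # both URLs are extracted, with or without extra attributes
```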

Now, we will place our crawl_task function in the module, add the @app.task decorator, and change the return message slightly, as follows:

@app.task
def crawl_task(url):
    request_data = requests.get(url)
    links = html_link_regex.findall(request_data.text)
    message = "The task %s found the following links %s.."\
        % (url, links)
    return message
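To see what the task computes without running a broker or a worker, here is an offline sketch of the same logic with the HTTP request stubbed out; the helper name crawl_links, the pattern, and the sample page are assumptions for illustration.

```python
import re

# Illustrative href-matching pattern, as discussed above.
html_link_regex = re.compile(r'<a\s(?:.*?\s)*?href=[\'"](.*?)[\'"].*?>')

def crawl_links(url, page_text):
    # Same extraction and message-building steps as crawl_task, but the
    # page body is passed in directly instead of fetched with requests.
    links = html_link_regex.findall(page_text)
    return "The task %s found the following links %s.." % (url, links)

page = ('<a href="http://example.com/a">A</a>'
        '<a href="http://example.com/b">B</a>')
message = crawl_links('http://example.com', page)
print(message)
```

In the real distributed flow, the client would instead dispatch the decorated task through the broker, for example with crawl_task.delay(url), and collect the returned message from the result backend.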