Python Web Scraping Cookbook

By: Michael Heydt

Overview of this book

Python Web Scraping Cookbook is a solution-focused book that will teach you techniques to develop high-performance scrapers and deal with crawlers, sitemaps, form automation, Ajax-based sites, caches, and more. You'll explore a number of real-world scenarios where every part of the development/product life cycle is fully covered. You will not only develop the skills needed to design and build reliable, performant data flows, but also learn to deploy your codebase to AWS. If you are involved in software engineering, product development, or data mining (or are interested in building data-driven products), you will find this book useful, as each recipe has a clear purpose and objective. From extracting data from websites to writing a sophisticated web crawler, the book's independent recipes will be a godsend. This book covers Python libraries such as requests and Beautiful Soup. You will learn about crawling, web spidering, working with Ajax websites, paginated items, and more. You will also learn to tackle problems such as 403 errors, working with proxies, scraping images, and using lxml. By the end of this book, you will be able to scrape websites more efficiently and to deploy and operate your scrapers in the cloud.

Scraping Python.org in urllib3 and Beautiful Soup

In this recipe, we swap out the use of Requests for another library, urllib3. This is another common library for retrieving data from URLs and for other functions involving URLs, such as parsing the parts of the actual URL and handling various encodings.
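
For example, urllib3 exposes a small URL parsing helper, urllib3.util.parse_url, which splits a URL into its component parts. The following is a minimal sketch; the URL is just an illustration:

from urllib3.util import parse_url

# parse_url returns a Url object with the URL broken into its parts
parts = parse_url('https://www.python.org/events/python-events/?page=2')
print(parts.scheme)  # https
print(parts.host)    # www.python.org
print(parts.path)    # /events/python-events/
print(parts.query)   # page=2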

Getting ready...

This recipe requires urllib3 to be installed, so install it with pip:

$ pip install urllib3
Collecting urllib3
Using cached urllib3-1.22-py2.py3-none-any.whl
Installing collected packages: urllib3
Successfully installed urllib3-1.22

How to do it...

The recipe is implemented in 01/02_events_with_urllib3.py. The code is as follows:

import urllib3
from bs4 import BeautifulSoup

def get_upcoming_events(url):
    # PoolManager handles connection pooling and thread safety for us
    req = urllib3.PoolManager()
    res = req.request('GET', url)

    soup = BeautifulSoup(res.data, 'html.parser')

    # the events are list items inside a <ul class="list-recent-events">
    events = soup.find('ul', {'class': 'list-recent-events'}).findAll('li')

    for event in events:
        event_details = dict()
        event_details['name'] = event.find('h3').find("a").text
        event_details['location'] = event.find('span', {'class': 'event-location'}).text
        event_details['time'] = event.find('time').text
        print(event_details)

get_upcoming_events('https://www.python.org/events/python-events/')

Then run it with the Python interpreter. You will get output identical to that of the previous recipe.

How it works...

The only difference in this recipe is how we fetch the resource:

req = urllib3.PoolManager()
res = req.request('GET', url)

Unlike Requests, urllib3 does not automatically decode the response content based on its headers. The code snippet in the preceding example still works because Beautiful Soup handles the encoding detection itself. Keep in mind, however, that encoding is an important part of scraping; if you decide to use your own framework or other libraries, make sure encoding is handled well.
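
If you need the decoded text yourself (without Beautiful Soup), one approach is to read the charset from the Content-Type header and fall back to UTF-8. This is a minimal sketch rather than a complete solution, since servers can omit or mislabel the charset:

import urllib3

http = urllib3.PoolManager()
res = http.request('GET', 'https://www.python.org/events/python-events/')

# the header typically looks like "text/html; charset=utf-8"
content_type = res.headers.get('Content-Type', '')
charset = 'utf-8'
if 'charset=' in content_type:
    charset = content_type.split('charset=')[-1].split(';')[0].strip()

html = res.data.decode(charset, errors='replace')
print(html[:100])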

There's more...

Requests and urllib3 are very similar in terms of capabilities, but Requests is generally recommended when it comes to making HTTP requests. The following code example illustrates a few of its advanced features:

import json
import requests

# Sessions build on top of urllib3's connection pooling.
# A session reuses the same TCP connection if requests
# are made to the same host.
# See https://en.wikipedia.org/wiki/HTTP_persistent_connection for details
session = requests.Session()

# You may pass in custom cookies
r = session.get('http://httpbin.org/get', cookies={'my-cookie': 'browser'})
print(r.text)
# '{"cookies": {"my-cookie": "browser"}}'

# Streaming is another nifty feature
# From http://docs.python-requests.org/en/master/user/advanced/#streaming-requests
# copyright belongs to requests.org
r = requests.get('http://httpbin.org/stream/20', stream=True)

for line in r.iter_lines():
    # filter out keep-alive new lines
    if line:
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line))
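
Streaming is also useful for downloading large files without holding the entire body in memory. The following is a minimal sketch; the URL, file name, and chunk size are arbitrary choices for illustration:

import requests

# stream the response and write it to disk in fixed-size chunks
r = requests.get('http://httpbin.org/image/png', stream=True)
with open('example.png', 'wb') as f:
    for chunk in r.iter_content(chunk_size=8192):
        if chunk:
            f.write(chunk)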