Python Web Scraping - Second Edition

By: Katharine Jarmul

Overview of this book

The Internet contains the most useful set of data ever assembled, most of which is publicly accessible for free. However, this data is not easily usable: it is embedded within the structure and style of websites and needs to be carefully extracted. Web scraping is becoming increasingly useful as a means to gather and make sense of the wealth of information available online. This book is the ultimate guide to using the latest features of Python 3.x to scrape data from websites. In the early chapters, you'll see how to extract data from static web pages. You'll learn to use caching with databases and files to save time and manage the load on servers. After covering the basics, you'll get hands-on practice building more sophisticated crawlers, including browser-based and concurrent scrapers. You'll determine when and how to scrape data from a JavaScript-dependent website using PyQt and Selenium. You'll get a better understanding of how to submit forms on complex websites protected by CAPTCHA. You'll find out how to automate these actions with Python packages such as mechanize. You'll also learn how to create class-based scrapers with the Scrapy library and apply what you've learned to real websites. By the end of the book, you will have explored testing websites with scrapers, remote scraping, best practices, working with images, and many other relevant topics.

Starting a project

Now that Scrapy is installed, we can run the startproject command to generate the default structure for our first Scrapy project.

To do this, open the terminal and navigate to the directory where you want to store your Scrapy project, and then run scrapy startproject <project name>. Here, we will use example for the project name:

$ scrapy startproject example
$ cd example

Here are the files and directories generated by the startproject command:

scrapy.cfg
example/
    __init__.py
    items.py
    middlewares.py
    pipelines.py
    settings.py
    spiders/
        __init__.py

The important files for this chapter (and in general for Scrapy use) are as follows:

  • items.py: This file defines a model of the fields that will be scraped (a sketch follows this list)
  • settings.py: This file defines settings, such as the user agent and crawl delay (also sketched below)
  • spiders/: The directory where the actual scraping and crawling code is stored
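To make the first bullet concrete, here is a minimal sketch of what items.py might look like once filled in. The CountryItem class and its name and population fields are illustrative assumptions, not part of the file that startproject generates:

# items.py -- hypothetical Item model; the class and field names
# below are illustrative assumptions, not generated by startproject
import scrapy

class CountryItem(scrapy.Item):
    # each Field() declares one attribute that a spider will populate
    name = scrapy.Field()
    population = scrapy.Field()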
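Likewise, settings.py is a plain Python module of assignments that Scrapy reads when a crawl starts. USER_AGENT and DOWNLOAD_DELAY are built-in Scrapy setting names; the values shown here are only illustrative:

# settings.py -- a minimal sketch of two common settings;
# the values are placeholder assumptions, not recommendations
USER_AGENT = 'example-crawler (+http://www.example.com)'
DOWNLOAD_DELAY = 5  # seconds to wait between requests to the same site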