Python Web Scraping - Second Edition

By: Katharine Jarmul

Overview of this book

The Internet contains the most useful set of data ever assembled, most of which is publicly accessible for free. However, this data is not easily usable. It is embedded within the structure and style of websites and needs to be carefully extracted. Web scraping is becoming increasingly useful as a means to gather and make sense of the wealth of information available online. This book is the ultimate guide to using the latest features of Python 3.x to scrape data from websites. In the early chapters, you'll see how to extract data from static web pages. You'll learn to use caching with databases and files to save time and manage the load on servers. After covering the basics, you'll get hands-on practice building a more sophisticated crawler using browsers, crawlers, and concurrent scrapers. You'll determine when and how to scrape data from a JavaScript-dependent website using PyQt and Selenium. You'll get a better understanding of how to submit forms on complex websites protected by CAPTCHA. You'll find out how to automate these actions with Python packages such as mechanize. You'll also learn how to create class-based scrapers with Scrapy libraries and implement your learning on real websites. By the end of the book, you will have explored testing websites with scrapers, remote scraping, best practices, working with images, and many other relevant topics.

Scraping with the shell command

Now that Scrapy can crawl the country pages, we can define what data to scrape. To help test how to extract data from a web page, Scrapy comes with a handy command called shell, which drops us into a Python or IPython interpreter with the Scrapy API available.

We can call the command using the URL we would like to start with, like so:

$ scrapy shell http://example.webscraping.com/view/United-Kingdom-239
...
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x7fd18a669cc0>
[s] item {}
[s] request <GET http://example.webscraping.com/view/United-Kingdom-239>
[s] response <200 http://example.webscraping.com/view/United-Kingdom-239>
[s] settings <scrapy.settings.Settings object at 0x7fd189655940>
[s] spider <CountrySpider 'country' at 0x7fd1893dd320...
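
From here we can experiment with selectors on the response object to check that they return the data we want before adding them to the spider. The queries below are only a sketch: the exact CSS classes and XPath expressions are assumptions about the example page's markup and need to be verified against its actual HTML.

>>> # check the page title using Scrapy's built-in CSS selector support
>>> response.css('title::text').extract_first()
>>> # hypothetical XPath -- the w2p_fw class is an assumption about how the
>>> # country attributes are laid out in table cells on this page
>>> response.xpath('//td[@class="w2p_fw"]/text()').extract()

If a selector returns an empty list, the expression does not match anything on the page, which makes the shell a quick way to debug extraction logic before running a full crawl.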