Hands-On Web Scraping with Python - Second Edition

By: Anish Chapagain

Overview of this book

Web scraping is a powerful tool for extracting data from the web, but it can be daunting for those without a technical background. Designed for novices, this book will help you grasp the fundamentals of web scraping and Python programming, even if you have no prior experience. Adopting a practical, hands-on approach, this updated edition of Hands-On Web Scraping with Python uses real-world examples and exercises to explain key concepts. Starting with an introduction to web scraping fundamentals and Python programming, you’ll cover a range of scraping techniques, including requests, lxml, pyquery, Scrapy, and Beautiful Soup. You’ll also get to grips with advanced topics such as secure web handling, web APIs, Selenium for web scraping, PDF extraction, regex, data analysis, EDA reports, visualization, and machine learning. This book emphasizes the importance of learning by doing. Each chapter integrates examples that demonstrate practical techniques and related skills. By the end of this book, you’ll be equipped with the skills to extract data from websites, a solid understanding of web scraping and Python programming, and the confidence to use these skills in your projects for analysis, visualization, and information discovery.
Table of Contents (20 chapters)

Part 1: Python and Web Scraping
Part 2: Beginning Web Scraping
Part 3: Advanced Scraping Concepts
Part 4: Advanced Data-Related Concepts
Part 5: Conclusion

Parsing robots.txt and sitemap.xml

In this section, we will introduce the robots.txt and sitemap.xml files and follow the instructions and resources they provide. We mentioned them in the Data-finding techniques used in web pages section of Chapter 1. In general, we can use robots.txt and sitemap.xml to dive deep into a website's pages, or its directories of pages, to find data and to manage missing or hidden links.
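Before looking at each file in detail, here is a minimal sketch of the sitemap side of this idea: downloading a sitemap.xml file and collecting the page URLs it lists. The https://www.example.com/sitemap.xml URL is only a placeholder, not a site used in this book, and the sketch assumes the file is a plain urlset sitemap rather than a sitemap index (an index lists further sitemap files instead of pages):

import xml.etree.ElementTree as ET  # standard-library XML parser
import requests  # HTTP library used throughout this book

# Placeholder sitemap URL; replace it with the site you are studying
sitemap_url = "https://www.example.com/sitemap.xml"

response = requests.get(sitemap_url, timeout=10)
response.raise_for_status()

# Sitemap files use the sitemaps.org namespace; each <url><loc> element holds a page URL
root = ET.fromstring(response.content)
namespace = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
page_urls = [loc.text for loc in root.findall("sm:url/sm:loc", namespace)]

print(f"Found {len(page_urls)} URLs")
for url in page_urls[:5]:
    print(url)

If the root element turns out to be sitemapindex rather than urlset, the same approach can be repeated on each nested sitemap URL.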

The robots.txt file

The robots.txt file, which implements the Robots Exclusion Protocol, is a web-based standard used by websites to exchange information with automated scripts. robots.txt carries instructions about site links and resources for web robots (crawlers, spiders, web wanderers, or web bots), using directives such as Allow, Disallow, Sitemap, Crawl-delay, and User-agent to direct a robot's behavior.
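Python's standard library ships with urllib.robotparser, which reads these directives for us. The following is a minimal sketch that uses https://www.python.org/robots.txt purely as an illustrative target; the values printed depend on whatever directives the site actually publishes at the time you run it:

from urllib import robotparser  # standard-library Robots Exclusion Protocol parser

# Point the parser at the site's robots.txt (illustrative target)
parser = robotparser.RobotFileParser()
parser.set_url("https://www.python.org/robots.txt")
parser.read()  # downloads and parses the file

# Check whether a given user agent may fetch a specific path (Allow/Disallow rules)
print(parser.can_fetch("*", "https://www.python.org/about/"))

# Crawl-delay for a user agent, if the site declares one (None otherwise)
print(parser.crawl_delay("*"))

# Sitemap directives, if any, returned as a list of URLs (Python 3.8+)
print(parser.site_maps())

The same directives can, of course, be fetched and inspected manually with requests, but robotparser also applies the matching rules for us when deciding whether a URL may be crawled.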

We can find robots.txt by appending robots.txt to the site's main URL. For example, robots.txt for https://www.python...