The Data Wrangling Workshop - Second Edition

By: Brian Lipp, Shubhadeep Roychowdhury, Dr. Tirthajyoti Sarkar

Overview of this book

While a huge amount of data is readily available to us, it is not useful in its raw form. For data to be meaningful, it must be curated and refined. If you’re a beginner, The Data Wrangling Workshop breaks the process down for you. You’ll start with the basics and build your knowledge, progressing from the core aspects of data wrangling to the most popular tools and techniques. This book starts by showing you how to work with data structures using Python. Through examples and activities, you’ll understand why you should stay away from the traditional data-cleaning methods used in other languages and take advantage of the specialized pre-built routines in Python. Later, you’ll learn how to use the same Python backend to extract and transform data from an array of sources, including the internet, large database vaults, and Excel financial tables. To help you prepare for more challenging scenarios, the book teaches you how to handle missing or incorrect data and reformat it based on the requirements of your downstream analytics tools. By the end of this book, you will have developed a solid understanding of how to perform data wrangling with Python, and learned several techniques and best practices for extracting, cleaning, transforming, and formatting your data efficiently from a diverse array of sources.
Table of Contents (11 chapters)

7. Advanced Web Scraping and Data Gathering

Activity 7.01: Extracting the Top 100 e-books from Gutenberg


These are the steps to complete this activity:

  1. Import the necessary libraries, including regex and BeautifulSoup:
    import urllib.request, urllib.parse, urllib.error
    import requests
    from bs4 import BeautifulSoup
    import ssl
    import re
  2. Read the HTML from the URL:
    # The URL of the Project Gutenberg Top 100 page goes here
    top100url = ''
    response = requests.get(top100url)
  3. Write a small function to check the status of the web request:
    def status_check(r):
        # Return 1 if the request succeeded (HTTP 200), otherwise -1
        if r.status_code == 200:
            return 1
        return -1
  4. Check the status of the response by passing it to the function defined in the previous step:
    status_check(response)

    The output is as follows...
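
The activity goes on to parse the page using the BeautifulSoup and re modules imported in step 1. The snippet below is a minimal sketch of that idea, not the book's exact solution; it assumes each e-book on the Top 100 page is linked through a URL of the form /ebooks/<number>:

    # Parse the page only if the request succeeded
    if status_check(response) == 1:
        soup = BeautifulSoup(response.text, 'html.parser')
        book_ids = []
        for link in soup.find_all('a'):
            href = link.get('href', '')
            # Assumption: e-book pages are linked as /ebooks/<number>
            match = re.match(r'^/ebooks/(\d+)$', href)
            if match:
                book_ids.append(int(match.group(1)))
        # The first 100 matches correspond to the Top 100 listing
        print(book_ids[:100])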