The Data Wrangling Workshop - Second Edition

By: Brian Lipp, Shubhadeep Roychowdhury, Dr. Tirthajyoti Sarkar

Overview of this book

A huge amount of data is readily available to us, but it is not useful in its raw form. For data to be meaningful, it must be curated and refined. If you're a beginner, The Data Wrangling Workshop will break down the process for you. You'll start with the basics and build your knowledge, progressing from the core concepts of data wrangling to the most popular tools and techniques.

The book starts by showing you how to work with data structures using Python. Through examples and activities, you'll understand why you should move away from the traditional methods of data cleaning used in other languages and take advantage of the specialized pre-built routines in Python. Later, you'll learn how to use the same Python backend to extract and transform data from an array of sources, including the internet, large database vaults, and Excel financial tables. To prepare you for more challenging scenarios, the book teaches you how to handle missing or incorrect data and reformat it to meet the requirements of your downstream analytics tool.

By the end of this book, you will have a solid understanding of how to perform data wrangling with Python, along with several techniques and best practices for efficiently extracting, cleaning, transforming, and formatting data from a diverse array of sources.
Table of Contents (11 chapters)

The Requests and BeautifulSoup Libraries

We will take advantage of two Python libraries in this chapter: requests and BeautifulSoup. To avoid dealing with HTTP methods at a lower level, we will use the requests library. It is an API built on top of pure Python web utility libraries, which makes making HTTP requests easy and intuitive.
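As a quick illustration of how little code a request takes, here is a minimal sketch of fetching a page with requests. The URL is just an example (any reachable page works), and the timeout value and error handling are choices made for this sketch, not requirements of the library.

```python
import requests

url = "https://en.wikipedia.org/wiki/Main_Page"  # example URL; any reachable page works

try:
    # requests.get performs an HTTP GET and returns a Response object
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # raise an exception for 4xx/5xx statuses

    print(response.status_code)             # 200 on success
    print(response.headers["Content-Type"]) # e.g. text/html; charset=UTF-8
    print(response.text[:200])              # first 200 characters of the HTML
except requests.RequestException as exc:
    # Covers connection errors, timeouts, and bad statuses alike
    print(f"Request failed: {exc}")
```

Note that `response.text` already gives you decoded text; the library handles the character encoding for you in the common case.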

BeautifulSoup is one of the most popular HTML parser packages. It parses the HTML content you pass to it and builds a detailed tree of all the tags and markup within the page for easy and intuitive traversal. This tree can be used by a programmer to look for certain markup elements (for example, a table, a hyperlink, or a blob of text within a particular div ID) to scrape useful data.
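To make the tree-traversal idea concrete, here is a minimal sketch of parsing a small HTML snippet with BeautifulSoup and looking up elements by tag and by ID. The HTML string is invented for illustration.

```python
from bs4 import BeautifulSoup

# An invented HTML fragment standing in for a downloaded page
html = """
<html><body>
  <div id="content">
    <a href="https://example.com">Example link</a>
    <table><tr><td>cell</td></tr></table>
  </div>
</body></html>
"""

# Build the parse tree using Python's built-in html.parser backend
soup = BeautifulSoup(html, "html.parser")

# Look up elements by tag name...
link = soup.find("a")
print(link["href"])  # https://example.com
print(link.text)     # Example link

# ...or by attribute, such as a particular div ID
content_div = soup.find("div", id="content")
print(content_div.find("table") is not None)  # True
```

In a real scraping workflow, the `html` string would be the `text` attribute of a response fetched with requests.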

We are going to do a couple of exercises to demonstrate how to use the requests library and decode the contents of the response received when data is fetched from the server.
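A response body arrives over the network as raw bytes; decoding it means converting those bytes to text using a character encoding, which requests normally does for you. The underlying step can be sketched as follows, using an invented byte string in place of a real response body:

```python
# Invented UTF-8 encoded bytes, standing in for a response's raw content
raw = b"<html><body>Caf\xc3\xa9</body></html>"

# Decode the bytes into a Python string using the page's encoding
decoded = raw.decode("utf-8")
print(decoded)  # <html><body>Café</body></html>
```

With requests, the raw bytes are available as `response.content`, while `response.text` is the already-decoded string.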

Exercise 7.01: Using the Requests Library to Get a Response from the Wikipedia Home...