The Data Wrangling Workshop - Second Edition

By: Brian Lipp, Shubhadeep Roychowdhury, Dr. Tirthajyoti Sarkar

Overview of this book

While a huge amount of data is readily available to us, it is not useful in its raw form. For data to be meaningful, it must be curated and refined. If you’re a beginner, The Data Wrangling Workshop will help break the process down for you. You’ll start with the basics and build your knowledge, progressing from the core concepts of data wrangling to the most popular tools and techniques.

The book starts by showing you how to work with data structures using Python. Through examples and activities, you’ll understand why you should move away from the traditional data-cleaning methods used in other languages and take advantage of the specialized pre-built routines in Python. Later, you’ll learn how to use the same Python backend to extract and transform data from an array of sources, including the internet, large database vaults, and Excel financial tables. To prepare you for more challenging scenarios, the book teaches you how to handle missing or incorrect data and reformat it to match the requirements of your downstream analytics tools.

By the end of this book, you will have developed a solid understanding of how to perform data wrangling with Python, and you will have learned several techniques and best practices for extracting, cleaning, transforming, and formatting data efficiently from a diverse array of sources.

Subsetting, Filtering, and Grouping

One of the most important aspects of data wrangling is curating data carefully from the deluge of streaming data that pours into an organization or business entity from various sources. More data is not always a good thing; data needs to be useful and of high quality to be used effectively in downstream activities of a data science pipeline, such as machine learning and predictive model building. Moreover, a single data source is often used for multiple purposes, which requires different subsets of the data to be processed by a data wrangling module and then passed on to separate analytics modules.

For example, let's say you are doing data wrangling on US state-level economic output. It is fairly common for one machine learning model to require data for large and populous states (such as California and Texas), while another model demands processed data for small and sparsely populated states (such as Montana...
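
To make this concrete, here is a minimal sketch of how such subsets might be carved out with pandas. The DataFrame, its column names, the figures, and the 10-million population threshold are all illustrative assumptions for this sketch, not values from the book's dataset:

import pandas as pd

# Illustrative state-level data; the column names and figures are
# hypothetical stand-ins, not the book's dataset.
df = pd.DataFrame({
    "state": ["California", "Texas", "Montana", "Wyoming"],
    "population_millions": [39.5, 29.1, 1.1, 0.6],
    "gdp_billions": [3353.0, 1887.0, 53.0, 36.0],
})

# Subsetting/filtering with a Boolean mask: one model gets the
# large, populous states...
large_states = df[df["population_millions"] > 10]

# ...while another gets the small, sparsely populated ones.
small_states = df[df["population_millions"] <= 10]

# Grouping: label each state and aggregate economic output per group.
df["size_class"] = df["population_millions"].map(
    lambda p: "large" if p > 10 else "small"
)
print(df.groupby("size_class")["gdp_billions"].sum())

Note how each Boolean mask yields an independent subset of the same source DataFrame, which is exactly the one-source, multiple-subsets pattern described above: the wrangling step produces a differently filtered view for each downstream analytics module.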