
Python Data Cleaning Cookbook

By: Michael Walker

Overview of this book

Getting clean data to reveal insights is essential, as directly jumping into data analysis without proper data cleaning may lead to incorrect results. This book shows you tools and techniques that you can apply to clean and handle data with Python. You'll begin by getting familiar with the shape of data by using practices that can be deployed routinely with most data sources. Then, the book teaches you how to manipulate data to get it into a useful form. You'll also learn how to filter and summarize data to gain insights and better understand what makes sense and what does not, along with discovering how to operate on data to address the issues you've identified. Moving on, you'll perform key tasks, such as handling missing values, validating errors, removing duplicate data, monitoring high volumes of data, and handling outliers and invalid dates. Next, you'll cover recipes on using supervised learning and Naive Bayes analysis to identify unexpected values and classification errors, and generate visualizations for exploratory data analysis (EDA) to visualize unexpected values. Finally, you'll build functions and classes that you can reuse without modification when you have new data. By the end of this Python book, you'll be equipped with all the key skills that you need to clean data and diagnose problems within it.

Melting multiple groups of columns

When we needed to melt multiple groups of columns in the previous recipe, we used melt twice and then merged the resulting DataFrames. That worked fine, but we can accomplish the same task in one step with the wide_to_long function. wide_to_long has more functionality than melt, but it is a bit more complicated to use.
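Before turning to the NLS data, here is a minimal sketch of the idea on a tiny made-up DataFrame (the data below is illustrative, not from the NLS file): two groups of wide columns, weeksworked and colenr, are reshaped to long form in a single wide_to_long call. The common column prefixes go in stubnames, and the numeric suffix becomes the new year column:

```python
import pandas as pd

# Illustrative data (not the NLS file): two groups of wide columns,
# weeksworked00/01 and colenr00/01, keyed by personid.
df = pd.DataFrame({
    "personid": [1, 2],
    "weeksworked00": [52, 40],
    "weeksworked01": [50, 0],
    "colenr00": ["1. Not enrolled", "2. 2-year college"],
    "colenr01": ["1. Not enrolled", "3. 4-year college"],
})

# wide_to_long melts both groups in one call. stubnames lists the shared
# column prefixes, i is the identifier column, and j names the new column
# that receives the numeric suffix ("00" -> 0, "01" -> 1).
long_df = pd.wide_to_long(
    df,
    stubnames=["weeksworked", "colenr"],
    i="personid",
    j="year",
)
print(long_df)
```

The result has a MultiIndex of personid and year, with one weeksworked column and one colenr column, which is what took two melt calls plus a merge in the previous recipe.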

Getting ready

We will work with the weeks worked and college enrollment data from the NLS in this recipe.

How to do it…

We will transform multiple groups of columns at once using wide_to_long:

  1. Import pandas and load the NLS data:
    >>> import pandas as pd
    >>> nls97 = pd.read_csv("data/nls97f.csv")
    >>> nls97.set_index('personid', inplace=True)
  2. View some of the weeks worked and college enrollment data:
    >>> weeksworkedcols = ['weeksworked00','weeksworked01','weeksworked02',
    ...   'weeksworked03',...