
Python Data Cleaning Cookbook

By: Michael Walker

Overview of this book

Getting clean data to reveal insights is essential, as jumping straight into analysis without proper data cleaning may lead to incorrect results. This book shows you tools and techniques that you can apply to clean and handle data with Python. You'll begin by getting familiar with the shape of data by using practices that can be deployed routinely with most data sources. Then, the book teaches you how to manipulate data to get it into a useful form. You'll also learn how to filter and summarize data to gain insights and better understand what makes sense and what does not, along with discovering how to operate on data to address the issues you've identified. Moving on, you'll perform key tasks, such as handling missing values, validating data, removing duplicates, monitoring high volumes of data, and handling outliers and invalid dates. Next, you'll cover recipes on using supervised learning and Naive Bayes analysis to identify unexpected values and classification errors, and generate visualizations for exploratory data analysis (EDA) to visualize unexpected values. Finally, you'll build functions and classes that you can reuse without modification when you have new data. By the end of this Python book, you'll be equipped with all the key skills you need to clean data and diagnose problems within it.

Using user-defined functions and apply with groupby

Despite the numerous aggregation functions available in pandas and NumPy, we sometimes have to write our own to get the results we need. In some cases, this requires the use of apply.

Getting ready

We will work with the National Longitudinal Survey (NLS) data in this recipe.

How to do it…

We will create our own functions to define the summary statistics we want by group:

  1. Import pandas and the NLS data:
    >>> import pandas as pd
    >>> import numpy as np
    >>> nls97 = pd.read_csv("data/nls97b.csv")
    >>> nls97.set_index("personid", inplace=True)
  2. Create a function for defining the interquartile range:
    >>> def iqr(x):
    ...   return x.quantile(0.75) - x.quantile(0.25)
    ... 
  3. Run the interquartile range function.

    First, create a dictionary that specifies which aggregation functions to run on each analysis variable:

    >>> aggdict = {'weeksworked06':['count', 'mean', iqr], 'childathome':['count', 'mean', iqr]}
    >>> nls97.groupby(['highestdegree']).agg(aggdict)
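The recipe's title also mentions apply. As a minimal sketch of that approach (not one of the numbered steps above; the helper name gettots is illustrative, while nls97, highestdegree, and weeksworked06 come from the code above), a user-defined function passed to apply can return several statistics per group at once:

    >>> def gettots(x):
    ...   # illustrative helper: several quantile-based summaries in one pass
    ...   out = {}
    ...   out['qr1'] = x.quantile(0.25)
    ...   out['med'] = x.median()
    ...   out['qr3'] = x.quantile(0.75)
    ...   out['count'] = x.count()
    ...   return pd.Series(out)
    ... 
    >>> nls97.groupby(['highestdegree'])['weeksworked06'].apply(gettots)

Unlike agg, which applies each function to one column at a time and expects a scalar back for each group, apply lets the function return multiple values per group, so related statistics can be computed together.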