
Hands-On Data Analysis with Pandas - Second Edition

By: Stefanie Molin

Overview of this book

Extracting valuable business insights is no longer a ‘nice-to-have’, but an essential skill for anyone who handles data in their enterprise. Hands-On Data Analysis with Pandas is here to help beginners and those who are migrating their skills into data science get up to speed in no time. This book will show you how to analyze your data, get started with machine learning, and work effectively with the Python libraries often used for data science, such as pandas, NumPy, matplotlib, seaborn, and scikit-learn. Using real-world datasets, you will learn how to use the pandas library to perform data wrangling to reshape, clean, and aggregate your data. Then, you will learn how to conduct exploratory data analysis by calculating summary statistics and visualizing the data to find patterns. In the concluding chapters, you will explore some applications of anomaly detection, regression, clustering, and classification using scikit-learn to make predictions based on past data. This updated edition will equip you with the skills you need to use pandas 1.x to efficiently perform various data manipulation tasks, reliably reproduce analyses, and visualize your data for effective decision making – valuable knowledge that can be applied across multiple domains.
Table of Contents (21 chapters)

  • Section 1: Getting Started with Pandas
  • Section 2: Using Pandas for Data Analysis
  • Section 3: Applications – Real-World Analyses Using Pandas
  • Section 4: Introduction to Machine Learning with Scikit-Learn
  • Section 5: Additional Resources
  • Solutions

Addressing class imbalance

When faced with a class imbalance in our data, we may want to balance the training data before building a model on it. To do this, we can use one of the following imbalanced sampling techniques:

  • Over-sample the minority class.
  • Under-sample the majority class.

In the case of over-sampling, we sample a larger proportion of observations from the minority class to bring its size closer to that of the majority class; this may involve a technique such as bootstrapping or generating new data points similar to the existing minority observations (using machine learning algorithms such as nearest neighbors). Under-sampling, on the other hand, uses less data overall by reducing the number of observations taken from the majority class. The decision to use over-sampling or under-sampling depends on the amount of data we started with and, in some cases, on computational costs. In practice, we wouldn't try either of these without first trying to build the model...
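
As a rough sketch of both approaches (not taken from the book), the following example builds a toy DataFrame with made-up feature and target columns and uses scikit-learn's resample() utility to bootstrap the minority class up and randomly sample the majority class down:

    import pandas as pd
    from sklearn.utils import resample

    # Hypothetical training set with a binary target (column names and class
    # sizes are made up for illustration): 8 majority-class rows (0) versus
    # 2 minority-class rows (1).
    train = pd.DataFrame({
        'feature': range(10),
        'target': [0] * 8 + [1] * 2,
    })

    majority = train[train.target == 0]
    minority = train[train.target == 1]

    # Over-sampling: bootstrap the minority class (sample with replacement)
    # until it is the same size as the majority class.
    minority_upsampled = resample(
        minority, replace=True, n_samples=len(majority), random_state=0
    )
    oversampled = pd.concat([majority, minority_upsampled])

    # Under-sampling: randomly keep only as many majority-class rows as there
    # are minority-class rows (sample without replacement).
    majority_downsampled = resample(
        majority, replace=False, n_samples=len(minority), random_state=0
    )
    undersampled = pd.concat([majority_downsampled, minority])

    print(oversampled.target.value_counts())   # both classes now have 8 rows
    print(undersampled.target.value_counts())  # both classes now have 2 rows

For the nearest neighbors-based generation of synthetic minority observations mentioned above, a dedicated library such as imbalanced-learn (which provides SMOTE and related over- and under-samplers) is typically used rather than writing our own implementation.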