
Machine Learning for Imbalanced Data

By : Kumar Abhishek, Dr. Mounir Abdelaziz

Overview of this book

As machine learning practitioners, we often encounter imbalanced datasets in which one class has considerably fewer instances than the other. Many machine learning algorithms assume a balance between the majority and minority classes, leading to suboptimal performance on imbalanced data. This comprehensive guide helps you address class imbalance to significantly improve model performance. Machine Learning for Imbalanced Data begins by introducing you to the challenges posed by imbalanced datasets and the importance of addressing them. It then guides you through techniques that enhance the performance of classical machine learning models on imbalanced data, including various sampling and cost-sensitive learning methods. As you progress, you’ll delve into similar and more advanced techniques for deep learning models, employing PyTorch as the primary framework. Throughout the book, hands-on examples provide working, reproducible code that demonstrates the practical implementation of each technique. By the end of this book, you’ll be adept at identifying and addressing class imbalance, confidently applying techniques such as sampling, cost-sensitive learning, and threshold adjustment with both traditional machine learning and deep learning models.

The impact of calibration on a model’s performance

Accuracy, log-loss, and Brier scores usually improve as a result of calibration. However, because calibration involves fitting an additional model to the calibration curve on a held-out calibration dataset, it can occasionally worsen accuracy or other performance metrics by a small amount. Nevertheless, the benefit of calibrated probabilities, namely interpretable values that genuinely represent likelihoods, far outweighs this slight performance impact.
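As a minimal sketch of this effect (not taken from the book, and using a synthetic imbalanced dataset for illustration), we can compare Brier scores before and after calibrating a classifier with scikit-learn’s `CalibratedClassifierCV`:

```python
# Sketch: comparing Brier scores of raw vs. calibrated probabilities.
# The dataset, model choice, and parameters here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

# Synthetic imbalanced dataset: roughly 90% majority, 10% minority.
X, y = make_classification(
    n_samples=5000, weights=[0.9, 0.1], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

# Uncalibrated model: Naive Bayes tends to push probabilities toward 0 or 1.
clf = GaussianNB().fit(X_train, y_train)
raw = clf.predict_proba(X_test)[:, 1]

# Isotonic calibration fitted via cross-validation on held-out folds.
cal = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
cal.fit(X_train, y_train)
calibrated = cal.predict_proba(X_test)[:, 1]

b_raw = brier_score_loss(y_test, raw)
b_cal = brier_score_loss(y_test, calibrated)
print(f"Brier (raw):        {b_raw:.4f}")
print(f"Brier (calibrated): {b_cal:.4f}")
```

On most runs the calibrated Brier score is lower (better), though, as noted above, calibration offers no guarantee of improving every metric on every dataset.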

As discussed in Chapter 1, Introduction to Data Imbalance in Machine Learning, ROC-AUC is a rank-based metric, meaning it evaluates the model’s ability to distinguish between classes based on the ranking of predicted scores rather than their absolute values. ROC-AUC doesn’t make any claim about accurate probability estimates. Strictly monotonic calibration functions, which continuously increase or decrease without...
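The rank-invariance of ROC-AUC can be illustrated with a small sketch (not from the book): applying any strictly monotonic function to the scores, such as a sigmoid, preserves their ranking and therefore leaves the ROC-AUC unchanged.

```python
# Sketch: ROC-AUC depends only on the ranking of scores, so a strictly
# monotonic transformation of the scores does not change it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)   # synthetic binary labels
scores = rng.random(200)                # synthetic raw scores

auc_raw = roc_auc_score(y_true, scores)

# A sigmoid is strictly monotonic: it reshapes the score values
# (as a calibration map might) without altering their order.
transformed = 1.0 / (1.0 + np.exp(-5.0 * (scores - 0.5)))
auc_transformed = roc_auc_score(y_true, transformed)

print(f"AUC before: {auc_raw:.4f}, after monotonic transform: {auc_transformed:.4f}")
```

This is why calibration, when performed with a strictly monotonic mapping, improves the interpretability of the probabilities without affecting rank-based metrics such as ROC-AUC.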