Machine Learning for Imbalanced Data

By: Kumar Abhishek, Dr. Mounir Abdelaziz

Overview of this book

As machine learning practitioners, we often encounter imbalanced datasets in which one class has considerably fewer instances than the other. Many machine learning algorithms assume an equilibrium between majority and minority classes, leading to suboptimal performance on imbalanced data. This comprehensive guide helps you address this class imbalance to significantly improve model performance. Machine Learning for Imbalanced Data begins by introducing you to the challenges posed by imbalanced datasets and the importance of addressing these issues. It then guides you through techniques that enhance the performance of classical machine learning models when using imbalanced data, including various sampling and cost-sensitive learning methods. As you progress, you’ll delve into similar and more advanced techniques for deep learning models, employing PyTorch as the primary framework. Throughout the book, hands-on examples will provide working and reproducible code that’ll demonstrate the practical implementation of each technique. By the end of this book, you’ll be adept at identifying and addressing class imbalances and confidently applying various techniques, including sampling, cost-sensitive techniques, and threshold adjustment, while using traditional machine learning or deep learning models.
Table of Contents (15 chapters)

Cost-Sensitive Learning for Decision Trees

Decision trees are binary trees that use a chain of conditional tests to predict the class of a sample. Every tree node represents the set of samples that satisfy the feature-based conditions leading to it. We split a node into two children based on a feature and a threshold value. Imagine a set of students with the features height, weight, age, class, and location. We can split this set into two parts on the age feature with a threshold of 8: all students younger than 8 go into the left child, and all those aged 8 or older go into the right child.
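The split described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the book's code; the sample data and the `split` helper are hypothetical.

```python
# Toy data for the student example; names and ages are made up for illustration.
students = [
    {"name": "A", "age": 6},
    {"name": "B", "age": 9},
    {"name": "C", "age": 8},
    {"name": "D", "age": 7},
]

def split(samples, feature, threshold):
    """One decision-tree split: feature < threshold goes left, the rest right."""
    left = [s for s in samples if s[feature] < threshold]
    right = [s for s in samples if s[feature] >= threshold]
    return left, right

# Split on the age feature with a threshold of 8.
left, right = split(students, "age", 8)
# left contains the students younger than 8; right those aged 8 or older.
```

A real decision tree applies this operation recursively, picking a new feature/threshold pair at every internal node.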

By successively choosing features and threshold values in this way, we can grow a full tree. Ideally, every leaf node of the tree will contain samples from only one class.
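Since this section is about cost-sensitive learning, it is worth noting how such a tree can be made cost-sensitive in practice. The sketch below assumes scikit-learn (a reasonable choice for the book's classical-ML examples, though not confirmed by this excerpt): the `class_weight` parameter of `DecisionTreeClassifier` raises the penalty for misclassifying the minority class, which influences the feature/threshold choices made at each split.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical imbalanced toy data: 95 majority (class 0) vs. 5 minority (class 1).
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(2, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

# Misclassifying a minority sample costs 19x as much as a majority sample
# (the 19:1 weight mirrors the 95:5 imbalance; the exact ratio is a choice).
clf = DecisionTreeClassifier(class_weight={0: 1, 1: 19}, random_state=0)
clf.fit(X, y)
preds = clf.predict(X)
```

Passing `class_weight="balanced"` instead computes these weights automatically from the class frequencies.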

A question often arises during the construction of a decision tree: “Which feature and threshold pair should be selected to partition the set of...