Principles of Data Science - Third Edition

By: Sinan Ozdemir
Overview of this book

Principles of Data Science bridges mathematics, programming, and business analysis, empowering you to confidently pose and address complex data questions and construct effective machine learning pipelines. This book will equip you with the tools to transform abstract concepts and raw statistics into actionable insights. Starting with cleaning and preparation, you’ll explore effective data mining strategies and techniques before moving on to building a holistic picture of how every piece of the data science puzzle fits together. Throughout the book, you’ll discover statistical models with which you can control and navigate even the densest or the sparsest of datasets and learn how to create powerful visualizations that communicate the stories hidden in your data. With a focus on application, this edition covers advanced transfer learning and pre-trained models for NLP and vision tasks. You’ll get to grips with advanced techniques for mitigating algorithmic bias in both data and models, and for detecting and addressing model and data drift. Finally, you’ll explore medium-level data governance, including data provenance, privacy, and deletion request handling. By the end of this data science book, you’ll have learned the fundamentals of computational mathematics and statistics, all while navigating the intricacies of modern ML and large pre-trained models like GPT and BERT.

Measuring bias

To successfully combat bias, we must first measure its existence and understand its impact on our ML models. Several statistical methods and techniques have been developed for this purpose, each offering a different perspective on bias and fairness. Here are a few essential methods:

  • Confusion matrix: A fundamental tool for evaluating the performance of an ML model, the confusion matrix can also reveal bias. It allows us to measure false positive and false negative rates, which can help us identify situations where the model performs differently for different groups.
  • Disparate impact analysis: This technique measures the ratio of favorable outcomes for a protected group compared to a non-protected group. If the ratio is significantly below one, it implies a disparate impact on the protected group, signaling potential bias.
  • Equality of odds: This method requires that a model’s error rates be equal across different groups. In other words, if a model...
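The first two techniques above can be sketched in a few lines of NumPy. The arrays below are hypothetical examples (not from the book): binary labels, model predictions, and a protected-group indicator. The sketch computes per-group false positive and false negative rates from confusion-matrix counts (large gaps between groups suggest an equality-of-odds violation) and the disparate impact ratio of favorable-outcome rates.

```python
import numpy as np

# Hypothetical labels, predictions, and protected-group flag (illustrative only)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = protected group

def error_rates(y_true, y_pred):
    """False positive and false negative rates from confusion-matrix counts."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr

# Per-group error rates -- large gaps between groups hint at unequal odds
for g in (0, 1):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")

# Disparate impact: favorable-outcome rate of the protected group divided by
# that of the non-protected group; a common rule of thumb flags ratios < 0.8
favorable_rate = lambda g: np.mean(y_pred[group == g])
di_ratio = favorable_rate(1) / favorable_rate(0)
print(f"disparate impact ratio: {di_ratio:.2f}")
```

With this toy data the protected group receives favorable predictions at a third of the rate of the non-protected group, which a fairness audit would flag for investigation. Libraries such as Fairlearn and AIF360 wrap these same per-group metrics behind higher-level APIs.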