Principles of Data Science - Third Edition

By: Sinan Ozdemir

Overview of this book

Principles of Data Science bridges mathematics, programming, and business analysis, empowering you to confidently pose and address complex data questions and construct effective machine learning pipelines. This book will equip you with the tools to transform abstract concepts and raw statistics into actionable insights. Starting with cleaning and preparation, you’ll explore effective data mining strategies and techniques before moving on to building a holistic picture of how every piece of the data science puzzle fits together. Throughout the book, you’ll discover statistical models with which you can control and navigate even the densest or the sparsest of datasets, and learn how to create powerful visualizations that communicate the stories hidden in your data. With a focus on application, this edition covers advanced transfer learning and pre-trained models for NLP and vision tasks. You’ll get to grips with advanced techniques for mitigating algorithmic bias in both data and models, as well as for addressing model and data drift. Finally, you’ll explore medium-level data governance, including data provenance, privacy, and deletion request handling. By the end of this data science book, you'll have learned the fundamentals of computational mathematics and statistics, all while navigating the intricacies of modern ML and large pre-trained models like GPT and BERT.
Table of Contents (18 chapters)

Emerging techniques in bias and fairness in ML

When it comes to the world of tech, one thing is certain – it never stands still. And ML is no exception. The quest for fairness and the need to tackle bias have given rise to some innovative and game-changing techniques. So, put on your techie hats, and let’s dive into some of these groundbreaking developments.

First off, let’s talk about interpretability. In an age where complex ML models are becoming the norm, interpretable models are a breath of fresh air. They’re transparent and easier to understand, and they allow us to gain insights into their decision-making process. Techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are leading the charge in this space. They not only shed light on the “how” and “why” of a model’s decision but also help in identifying any biases lurking in the shadows. We will talk more...
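To make the idea behind SHAP concrete, here is a minimal from-scratch sketch of exact Shapley values for a tiny, hypothetical scoring model. In practice you would use the `shap` library against a trained model; this toy version simply enumerates every feature coalition, replacing "absent" features with a background value, so you can see where each feature's attribution comes from. The model, feature names, and background values are all invented for illustration.

```python
from itertools import combinations
from math import factorial

# Hypothetical model: a simple linear score over three features.
# (In real use, this would be a trained model wrapped by the shap library.)
def model(x):
    return 3.0 * x["age"] + 2.0 * x["income"] - 1.0 * x["debt"]

def shapley_values(model, instance, background):
    """Exact Shapley values by brute-force coalition enumeration.

    A feature that is "in" a coalition takes its value from the instance
    being explained; a feature that is "out" falls back to the background.
    """
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: instance[g] if (g in subset or g == f) else background[g]
                          for g in features}
                without_f = {g: instance[g] if g in subset else background[g]
                             for g in features}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

instance = {"age": 2.0, "income": 1.0, "debt": 4.0}
background = {"age": 0.0, "income": 0.0, "debt": 0.0}
phi = shapley_values(model, instance, background)
print(phi)  # attributions sum to model(instance) - model(background)
```

For a linear model, each feature's Shapley value collapses to its coefficient times its deviation from the background, so the attributions here make the bias-auditing use case visible: a large attribution on a sensitive feature is an immediate red flag. Brute-force enumeration is exponential in the number of features, which is exactly why `shap` ships approximations such as KernelSHAP and TreeSHAP.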