Data Augmentation with Python

By: Duc Haba
Overview of this book

Data is paramount in AI projects, especially for deep learning and generative AI, because forecasting accuracy depends on robust input datasets. Acquiring additional data through traditional methods can be challenging, expensive, and impractical, and data augmentation offers an economical option for extending a dataset. This book teaches you over 20 geometric, photometric, and random erasing augmentation methods using seven real-world datasets for image classification and segmentation. You’ll also review eight open source image augmentation libraries, write object-oriented programming (OOP) wrapper functions in Python Notebooks, view color image augmentation effects, analyze safe levels and biases, and explore fun facts and take on fun challenges. As you advance, you’ll discover over 20 character and word techniques for text augmentation using two real-world datasets and excerpts from four classic books. The chapter on advanced text augmentation uses machine learning models, such as Transformer, Word2vec, BERT, and GPT-2, to extend the text dataset. The chapters on audio and tabular data likewise feature real-world data, open source libraries, custom plots, and Python Notebooks, along with fun facts and challenges. By the end of this book, you will be proficient in image, text, audio, and tabular data augmentation techniques.
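To make the three image-augmentation families mentioned above concrete, here is a minimal, hedged sketch using only NumPy (the book itself works with dedicated augmentation libraries; the function names and parameters below are illustrative assumptions, not the book's API):

```python
import numpy as np

rng = np.random.default_rng(42)

def horizontal_flip(img):
    """Geometric augmentation: mirror the image left to right."""
    return img[:, ::-1]

def adjust_brightness(img, factor=1.2):
    """Photometric augmentation: scale pixel intensities, clipped to [0, 255]."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def random_erase(img, size=8):
    """Random erasing: blank out a randomly placed square patch."""
    out = img.copy()
    h, w = out.shape[:2]
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out[y:y + size, x:x + size] = 0
    return out

# Chain the three techniques on a dummy 32x32 RGB image.
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
augmented = random_erase(adjust_brightness(horizontal_flip(img)))
print(augmented.shape)  # (32, 32, 3)
```

In practice, each transform would be applied with some probability per training sample so the model sees a slightly different dataset every epoch.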
Table of Contents (17 chapters)
Part 1: Data Augmentation
Part 2: Image Augmentation
Part 3: Text Augmentation
Part 4: Audio Data Augmentation
Part 5: Tabular Data Augmentation

Systemic biases

If we cannot conceive a method to calculate computational and human biases, then it is impossible to devise an algorithm that computes systemic biases programmatically. We must rely on human judgment to spot systemic bias in a dataset. Furthermore, that judgment has to be specific to a particular dataset with a distinct AI prediction goal. There are no generalization rules and no fairness metric to follow.

Systemic biases in AI are the most notorious of all AI biases. Simply put, systemic discrimination occurs when a business, institution, or government limits access to AI benefits to one group while excluding other underserved groups. It is insidious because it hides behind society’s existing rules and norms. Institutional racism and sexism are the most common examples. Another everyday example is an AI accessibility issue: limiting or denying access to people with disabilities, such as the sight- and hearing-impaired.

The poor and the underserved have no representation...