
Data Augmentation with Python

By: Duc Haba

Overview of this book

Data is paramount in AI projects, especially for deep learning and generative AI, as forecasting accuracy relies on input datasets being robust. Acquiring additional data through traditional methods can be challenging, expensive, and impractical, and data augmentation offers an economical option to extend the dataset. The book teaches you over 20 geometric, photometric, and random erasing augmentation methods using seven real-world datasets for image classification and segmentation. You’ll also review eight open source image augmentation libraries, write object-oriented programming (OOP) wrapper functions in Python Notebooks, view color image augmentation effects, analyze safe levels and biases, as well as explore fun facts and take on fun challenges. As you advance, you’ll discover over 20 character and word techniques for text augmentation using two real-world datasets and excerpts from four classic books. The chapter on advanced text augmentation uses machine learning models such as Transformer, Word2vec, BERT, and GPT-2 to extend the text dataset. The chapters on audio and tabular data feature real-world data, open source libraries, custom plots, and Python Notebooks, along with fun facts and challenges. By the end of this book, you will be proficient in image, text, audio, and tabular data augmentation techniques.
Table of Contents (17 chapters)
Part 1: Data Augmentation
Part 2: Image Augmentation
Part 3: Text Augmentation
Part 4: Audio Data Augmentation
Part 5: Tabular Data Augmentation

Human biases

Human biases are even harder to quantify with Python code. No Python library, or library in any other language, computes a numeric score for human bias in a dataset, so we rely on observation to spot such biases. Manually studying a particular dataset to derive possible human biases is time-consuming. One could argue that it is not a programmer’s or data scientist’s job because there is no programmable method to follow.
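While no library scores human bias directly, we can at least automate part of the observation step. The following is a minimal sketch that flags columns whose value distribution is heavily skewed, so a human reviewer knows where to look; the column names, sample data, and the 0.75 threshold are illustrative assumptions, not a bias metric.

```python
# Sketch: surface skewed value distributions for manual bias review.
# This does NOT measure human bias; it only flags candidate columns
# (e.g., a hypothetical "gender" column) for a human to inspect.
import pandas as pd

def flag_skewed_columns(df, columns, threshold=0.75):
    """Return {column: share} for columns whose most frequent value
    exceeds `threshold` of all rows."""
    flagged = {}
    for col in columns:
        # Share of the single most common value in the column
        share = df[col].value_counts(normalize=True).iloc[0]
        if share > threshold:
            flagged[col] = round(share, 2)
    return flagged

# Illustrative dataset: 90/10 gender split, 50/50 region split
df = pd.DataFrame({
    "gender": ["male"] * 90 + ["female"] * 10,
    "region": ["north"] * 50 + ["south"] * 50,
})
print(flag_skewed_columns(df, ["gender", "region"]))
# {'gender': 0.9}
```

A flagged column is only a prompt for investigation: a 90/10 split may be entirely appropriate for some datasets, which is exactly why the final judgment stays with a human observer.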

Human biases reflect systematic errors in human thought. In other words, when you develop an AI system, you are limited by the algorithm and data you choose, so its predictions can be biased by your selections. These prejudices are implicit in individuals, groups, institutions, businesses, education, and government.

There is a wide variety of human biases. Cognitive and perceptual biases show themselves in all domains and are not unique to human interactions with AI. There is an entire field of study...