Data Augmentation with Python

By: Duc Haba

Overview of this book

Data is paramount in AI projects, especially for deep learning and generative AI, because forecasting accuracy depends on robust input datasets. Acquiring additional data through traditional methods can be difficult, expensive, and impractical, and data augmentation offers an economical way to extend a dataset. This book teaches you more than 20 geometric, photometric, and random erasing augmentation methods using seven real-world datasets for image classification and segmentation. You'll also review eight open source image augmentation libraries, write object-oriented programming (OOP) wrapper functions in Python Notebooks, view color image augmentation effects, analyze safe levels and biases, and explore fun facts and fun challenges. As you advance, you'll discover more than 20 character and word techniques for text augmentation using two real-world datasets and excerpts from four classic books. The chapter on advanced text augmentation extends the text dataset with machine learning models such as Transformer, Word2vec, BERT, and GPT-2. The chapters on audio and tabular data likewise combine real-world data, open source libraries, custom plots, and Python Notebooks with fun facts and challenges. By the end of this book, you will be proficient in image, text, audio, and tabular data augmentation techniques.
Table of Contents (17 chapters)

Part 1: Data Augmentation
Part 2: Image Augmentation
Part 3: Text Augmentation
Part 4: Audio Data Augmentation
Part 5: Tabular Data Augmentation

Random erasing

Random erasing selects a rectangular region in an image and replaces or overlays it with a rectangle of gray, black, white, or Gaussian noise pixels. It is counterintuitive that this technique increases the AI model's forecasting accuracy.
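
Here is a minimal sketch of the idea in plain NumPy, assuming the image is a (height, width, channels) uint8 array; the function name and parameters are illustrative, not taken from the book's wrapper library:

import numpy as np

def random_erase(image, erase_fraction=0.2, mode="random"):
    """Erase one rectangle in a copy of `image`.

    mode: "gray", "black", "white", or "random" (Gaussian noise).
    erase_fraction: approximate fraction of the image area to erase.
    """
    img = image.copy()
    height, width = img.shape[:2]

    # Pick a rectangle whose area is roughly erase_fraction of the image.
    target_area = int(height * width * erase_fraction)
    rect_h = np.random.randint(1, height)
    rect_w = max(1, min(width, target_area // rect_h))
    top = np.random.randint(0, height - rect_h + 1)
    left = np.random.randint(0, width - rect_w + 1)

    # Choose the fill value for the erased region.
    region_shape = img[top:top + rect_h, left:left + rect_w].shape
    if mode == "gray":
        fill = 128
    elif mode == "black":
        fill = 0
    elif mode == "white":
        fill = 255
    else:  # Gaussian noise, clipped to the valid pixel range
        fill = np.clip(np.random.normal(127, 50, region_shape), 0, 255).astype(img.dtype)

    img[top:top + rect_h, left:left + rect_w] = fill
    return img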

The strength of any ML model, especially a CNN, lies in predicting or forecasting data it has not seen during the training or validation stages. Dropout, where randomly selected neurons are ignored during training, is a well-proven method of reducing overfitting and increasing accuracy. Random erasing has a similar effect: by occluding part of the input image, it forces the model not to rely on any single region, much like increasing the dropout rate.
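
In practice you rarely need to code this by hand; for example, torchvision ships a RandomErasing transform that applies the occlusion at the input level, after the image has been converted to a tensor. The parameter values below are illustrative defaults, not the book's recommended settings:

from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.ToTensor(),                       # PIL image -> float tensor
    transforms.RandomErasing(p=0.5,              # erase in half of the images
                             scale=(0.02, 0.2),  # erased area as a fraction of the image
                             ratio=(0.3, 3.3),   # aspect-ratio range of the rectangle
                             value='random'),    # fill with random noise
])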

A paper called Random Erasing Data Augmentation, published on arXiv on November 16, 2017, shows how random erasing increases accuracy and reduces overfitting in a CNN-based model. The paper's authors are Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang, from the Cognitive Science Department at Xiamen University, China, and the University...