
Data Augmentation with Python

By: Duc Haba

Overview of this book

Data is paramount in AI projects, especially for deep learning and generative AI, as forecasting accuracy relies on robust input datasets. Acquiring additional data through traditional methods can be challenging, expensive, and impractical, and data augmentation offers an economical alternative for extending a dataset. This book teaches you over 20 geometric, photometric, and random erasing augmentation methods using seven real-world datasets for image classification and segmentation. You’ll also review eight open source image augmentation libraries, write object-oriented programming (OOP) wrapper functions in Python Notebooks, view color image augmentation effects, analyze safe levels and biases, and explore fun facts and take on fun challenges. As you advance, you’ll discover over 20 character and word techniques for text augmentation using two real-world datasets and excerpts from four classic books. The chapter on advanced text augmentation uses machine learning models, such as Transformer, Word2vec, BERT, and GPT-2, to extend the text dataset. The chapters on audio and tabular data likewise feature real-world data, open source libraries, custom plots, and Python Notebooks, along with fun facts and challenges. By the end of this book, you will be proficient in image, text, audio, and tabular data augmentation techniques.
Table of Contents (17 chapters)
Part 1: Data Augmentation
Part 2: Image Augmentation
Part 3: Text Augmentation
Part 4: Audio Data Augmentation
Part 5: Tabular Data Augmentation

Word augmenting

In this chapter, the word augmentation techniques are similar to the methods in Chapter 5, which used the Nlpaug library. The difference is that, rather than relying on library rules alone, the wrapper functions use powerful ML models to achieve remarkable results. Sometimes, the rewritten text is akin to that of a human writer.

In particular, you will learn four new techniques, each with two mode variants. Let’s start with Word2Vec:

  • The Word2Vec method uses the neural network NLP Word2Vec algorithm with the GoogleNews-vectors-negative300 pre-trained model. Google trained it on a large corpus of about 100 billion words, producing 300-dimensional word vectors. Substitute and insert are the two mode variants.
  • The BERT method uses Google’s Transformer-based BERT pre-trained model. Substitute and insert are the two mode variants.
  • The RoBERTa method uses an optimized variant of the BERT model. Substitute and insert are the two mode variants.
  • The last word augmenting technique...
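To make the Word2Vec substitute and insert modes concrete, here is a minimal toy sketch of the underlying idea. The hand-made NEIGHBORS table stands in for nearest-neighbor lookups in the GoogleNews-vectors-negative300 embedding space, and the function names are illustrative assumptions, not the book's wrapper functions or the Nlpaug API:

```python
import random

# Toy "embedding neighborhood" table standing in for a real Word2Vec
# model: each word maps to its nearest neighbors in vector space.
NEIGHBORS = {
    "quick": ["fast", "rapid", "speedy"],
    "happy": ["glad", "joyful", "cheerful"],
    "dog": ["puppy", "hound", "canine"],
}

def word2vec_substitute(text, p=0.3, rng=None):
    """Substitute mode: replace each known word with a neighbor,
    with probability p. Sentence length is unchanged."""
    rng = rng or random.Random(0)
    out = []
    for word in text.split():
        if word in NEIGHBORS and rng.random() < p:
            out.append(rng.choice(NEIGHBORS[word]))
        else:
            out.append(word)
    return " ".join(out)

def word2vec_insert(text, p=0.3, rng=None):
    """Insert mode: keep each word and, with probability p, insert
    one of its neighbors right after it. Sentence grows."""
    rng = rng or random.Random(0)
    out = []
    for word in text.split():
        out.append(word)
        if word in NEIGHBORS and rng.random() < p:
            out.append(rng.choice(NEIGHBORS[word]))
    return " ".join(out)
```

A real implementation would query the pre-trained model for nearest vectors instead of a static table, but the substitute/insert distinction, replace in place versus add alongside, is exactly the one the two mode variants above describe.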