Data Augmentation with Python

By: Duc Haba

Overview of this book

Data is paramount in AI projects, especially for deep learning and generative AI, because forecasting accuracy depends on robust input datasets. Acquiring additional data through traditional methods can be challenging, expensive, and impractical, so data augmentation offers an economical way to extend a dataset. This book teaches you more than 20 geometric, photometric, and random erasing augmentation methods using seven real-world datasets for image classification and segmentation. You’ll also review eight open source image augmentation libraries, write object-oriented programming (OOP) wrapper functions in Python Notebooks, view color image augmentation effects, and analyze safe levels and biases, as well as explore fun facts and take on fun challenges. As you advance, you’ll discover more than 20 character- and word-level techniques for text augmentation using two real-world datasets and excerpts from four classic books. The chapter on advanced text augmentation extends the text dataset with machine learning models such as Transformer, Word2vec, BERT, and GPT-2. The chapters on audio and tabular data likewise feature real-world data, open source libraries, custom plots, and Python Notebooks, along with fun facts and challenges. By the end of this book, you will be proficient in image, text, audio, and tabular data augmentation techniques.
Table of Contents (17 chapters)
Part 1: Data Augmentation
Part 2: Image Augmentation
Part 3: Text Augmentation
Part 4: Audio Data Augmentation
Part 5: Tabular Data Augmentation

Machine learning models

In this chapter, the text augmentation wrapper functions use ML to generate new text for training ML models. A full explanation of how these models are built is beyond our scope, but a brief description of each model and its algorithm is necessary. The Python wrapper functions will use the following ML models under the hood:

  • In 2013, Tomáš Mikolov published Word2Vec, an NLP algorithm that uses a neural network to learn word representations. The model can propose synonyms for words in the input text.
  • The Global Vectors for Word Representation (GloVe) algorithm was created by Jeffrey Pennington, Richard Socher, and Christopher D. Manning in 2014. It is an unsupervised NLP algorithm that represents words as vectors; in the resulting vector space, the closest neighboring words are grouped together.
  • Wiki-news-300d-1M is a pre-trained ML model that uses the fastText open source library. It was trained on 1 million words from Wikipedia 2017 articles, the UMBC...
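All three models share the same core idea behind embedding-based text augmentation: represent each word as a vector and treat its nearest neighbors as synonym candidates. The sketch below illustrates that mechanism with a tiny, hand-made set of 3-D vectors standing in for a real pre-trained model (real embeddings such as wiki-news-300d-1M have 300 dimensions and a vocabulary of a million words); the words, vectors, and function names here are hypothetical illustrations, not the book's actual wrapper functions.

```python
import math

# Toy word vectors standing in for a trained Word2Vec/GloVe/fastText model.
# The values are made up for illustration only.
EMBEDDINGS = {
    "happy":  [0.90, 0.10, 0.20],
    "glad":   [0.85, 0.15, 0.25],
    "joyful": [0.88, 0.05, 0.30],
    "sad":    [-0.80, 0.20, 0.10],
    "car":    [0.10, 0.90, -0.40],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def propose_synonyms(word, top_n=2):
    """Return the top_n nearest neighbors of `word` in embedding space."""
    target = EMBEDDINGS[word]
    scored = [
        (other, cosine_similarity(target, vec))
        for other, vec in EMBEDDINGS.items()
        if other != word
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scored[:top_n]]

def augment(sentence, word, top_n=2):
    """Create new training sentences by swapping `word` for its neighbors."""
    return [sentence.replace(word, s) for s in propose_synonyms(word, top_n)]

print(propose_synonyms("happy"))             # → ['glad', 'joyful']
print(augment("I am happy today", "happy"))  # two augmented sentences
```

A production pipeline would load real pre-trained vectors (for example, via an embedding library) instead of the toy dictionary, and would filter candidates by a minimum similarity threshold, the "safe level" idea discussed elsewhere in the book, so that weak neighbors such as "car" never replace "happy".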