Transformers for Natural Language Processing

By: Denis Rothman

Overview of this book

The transformer architecture has proved revolutionary, outperforming the classical RNN and CNN models in use today. With an apply-as-you-learn approach, Transformers for Natural Language Processing investigates deep learning for machine translation, speech-to-text, text-to-speech, language modeling, question answering, and many other NLP domains with transformers. The book takes you through NLP with Python and examines eminent models and datasets within the transformer architecture created by pioneers such as Google, Facebook, Microsoft, OpenAI, and Hugging Face.

The book trains you in three stages. The first stage introduces you to transformer architectures, starting with the original Transformer, before moving on to RoBERTa, BERT, and DistilBERT models. You will discover training methods for smaller transformers that can outperform GPT-3 in some cases. In the second stage, you will apply transformers for Natural Language Understanding (NLU) and Natural Language Generation (NLG). Finally, the third stage will help you grasp advanced language understanding techniques such as optimizing social network datasets and fake news identification.

By the end of this NLP book, you will understand transformers from a cognitive science perspective and be proficient in applying pretrained transformer models from tech giants to various datasets.

Transformers, reformers, PET, or GPT?

Before using GPT models, we need to stop and look at transformers from a project management perspective at this point in our book's journey. Which model and which method should we choose for a given NLP project? Should we trust any of them? Once we consider cost management, accountability follows, and choosing a model and a machine becomes a life-and-death decision for a project. In this section, we will stop and think before entering the world of the recent GPT-2 and huge GPT-3 models (and more may come).

We have successively gone through the following (a short code sketch follows this list):

  • The original architecture of the Transformer with an encoder and a decoder stack in Chapter 1, Getting Started with the Model Architecture of the Transformer.
  • Fine-tuning a pretrained BERT model with only an encoder stack and no decoder stack in Chapter 2, Fine-Tuning BERT Models.
  • Training a RoBERTa-like model with only an encoder stack and no decoder stack in Chapter 3, Pretraining...
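To make the encoder-only architectures of Chapters 2 and 3 concrete, here is a minimal sketch using the Hugging Face transformers library, which the book relies on throughout. The checkpoint name and configuration values are illustrative assumptions, not the book's exact code:

from transformers import BertModel, RobertaConfig, RobertaModel

# Chapter 2 pattern: fine-tune a pretrained encoder-only BERT checkpoint.
bert = BertModel.from_pretrained("bert-base-uncased")

# Chapter 3 pattern: a RoBERTa-like encoder-only model trained from scratch,
# so it is built from a configuration rather than a pretrained checkpoint.
# These hyperparameter values are illustrative assumptions.
config = RobertaConfig(
    vocab_size=52_000,
    num_hidden_layers=6,
    num_attention_heads=12,
    hidden_size=768,
)
roberta_like = RobertaModel(config)

# Both models are pure encoder stacks: unlike the original Transformer of
# Chapter 1, neither contains a decoder.
print(f"BERT parameters: {bert.num_parameters():,}")
print(f"RoBERTa-like parameters: {roberta_like.num_parameters():,}")

Running the sketch downloads the BERT checkpoint and prints the parameter counts of both models, which is one quick way to size up candidate models before committing a project's budget to one of them.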