Transformers for Natural Language Processing - Second Edition

By: Denis Rothman
Overview of this book

Transformers are...well...transforming the world of AI. There are many platforms and models out there, but which ones best suit your needs? Transformers for Natural Language Processing, 2nd Edition, guides you through the world of transformers, highlighting the strengths of different models and platforms, while teaching you the problem-solving skills you need to tackle model weaknesses. You'll use Hugging Face to pretrain a RoBERTa model from scratch, from building the dataset to defining the data collator to training the model. If you're looking to fine-tune a pretrained model, including GPT-3, then Transformers for Natural Language Processing, 2nd Edition, shows you how with step-by-step guides. The book investigates machine translations, speech-to-text, text-to-speech, question-answering, and many more NLP tasks. It provides techniques to solve hard language problems and may even help with fake news anxiety (read chapter 13 for more details). You'll see how cutting-edge platforms, such as OpenAI, have taken transformers beyond language into computer vision tasks and code creation using DALL-E 2, ChatGPT, and GPT-4. By the end of this book, you'll know how transformers work and how to implement them and resolve issues like an AI detective.

What this book covers

Part I: Introduction to Transformer Architectures

Chapter 1, What are Transformers?, explains, at a high level, what transformers are. We’ll look at the transformer ecosystem and the properties of foundation models. The chapter highlights many of the platforms available and the evolution of Industry 4.0 AI specialists.

Chapter 2, Getting Started with the Architecture of the Transformer Model, goes through the background of NLP to show how RNN, LSTM, and CNN deep learning architectures evolved into the Transformer architecture that opened a new era. We will examine the Transformer’s architecture through the unique Attention Is All You Need approach invented by the Google Research and Google Brain authors. We will describe the theory of transformers. We will get our hands dirty in Python to see how the multi-head attention sublayers work. By the end of this chapter, you will understand the original architecture of the Transformer and be ready to explore its many variants and uses in the following chapters.
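
As a preview of the chapter’s hands-on Python work, the following minimal sketch computes scaled dot-product attention for a single head with NumPy; it illustrates the mechanism only and is not the chapter’s notebook code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(QK^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity of each query with each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                          # weighted sum of the values

# Toy example: 3 tokens, d_model = 4 (random vectors stand in for real embeddings)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```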

Chapter 3, Fine-Tuning BERT Models, builds on the architecture of the original Transformer. Bidirectional Encoder Representations from Transformers (BERT) shows you a new way of perceiving the world of NLP. Instead of analyzing a past sequence to predict a future sequence, BERT attends to the whole sequence! We will first go through the key innovations of BERT’s architecture and then fine-tune a BERT model by going through each step in a Google Colaboratory notebook. Like humans, BERT can learn a task and then perform new, related tasks without having to learn the topic from scratch.
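
As a taste of this workflow, here is a minimal fine-tuning sketch using the Hugging Face transformers Trainer API; the dataset choice (GLUE CoLA) and the hyperparameters are illustrative assumptions, not the chapter’s exact notebook.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Illustrative choice: single-sentence acceptability classification (GLUE CoLA)
dataset = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="bert-finetuned",
                         per_device_train_batch_size=16,
                         num_train_epochs=2,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```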

Chapter 4, Pretraining a RoBERTa Model from Scratch, builds a RoBERTa transformer model from scratch using the Hugging Face PyTorch modules. The transformer will be both BERT-like and DistilBERT-like. First, we will train a tokenizer from scratch on a customized dataset. The trained transformer will then run on a downstream masked language modeling task.
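
A minimal sketch of the tokenizer-training step, assuming the Hugging Face tokenizers library; the corpus file, vocabulary size, and output directory are placeholders.

```python
import os
from tokenizers import ByteLevelBPETokenizer

# Placeholders: substitute your own corpus file and output directory.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["my_corpus.txt"], vocab_size=52_000, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])

os.makedirs("tokenizer_dir", exist_ok=True)
tokenizer.save_model("tokenizer_dir")  # writes vocab.json and merges.txt
```

The saved vocabulary and merge files can then be loaded into a RoBERTa-style tokenizer when configuring the model and data collator for masked language modeling.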

Part II: Applying Transformers for Natural Language Understanding and Generation

Chapter 5, Downstream NLP Tasks with Transformers, reveals the magic of transformer models with downstream NLP tasks. A pretrained transformer model can be fine-tuned to solve a range of NLP tasks such as BoolQ, CB, MultiRC, RTE, WiC, and more, dominating the GLUE and SuperGLUE leaderboards. We will go through the evaluation process of transformers, the tasks, datasets, and metrics. We will then run some of the downstream tasks with Hugging Face’s pipeline of transformers.
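
As a taste of what the chapter covers, here is a minimal sketch using Hugging Face’s pipeline API; the default checkpoints downloaded for each task are assumptions and may change between library versions.

```python
from transformers import pipeline

# Sentiment analysis with the library's default checkpoint for the task
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers make downstream NLP tasks remarkably accessible."))

# Extractive question answering over a short context
qa = pipeline("question-answering")
print(qa(question="What can a pretrained transformer be fine-tuned on?",
         context="A pretrained transformer model can be fine-tuned on a downstream task."))
```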

Chapter 6, Machine Translation with the Transformer, defines machine translation and shows how to go from human baselines to machine transduction methods. We will then preprocess a WMT French-English dataset from the European Parliament. Machine translation requires precise evaluation methods, and in this chapter, we will explore the BLEU scoring method. Finally, we will implement a Transformer machine translation model with Trax.
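
For reference, here is a minimal BLEU computation with NLTK, one common way to score a candidate translation against a reference; the sentences are toy examples.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when higher-order n-grams are missing in short sentences.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```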

Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines, explores many aspects of OpenAI’s GPT-2 and GPT-3 transformers. We will first examine the architecture of OpenAI’s GPT models before explaining the different GPT-3 engines. Then we will run a GPT-2 345M parameter model and interact with it to generate text. Next, we’ll see the GPT-3 playground in action before coding a GPT-3 model for NLP tasks and comparing the results to GPT-2.
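
As a lightweight stand-in for the text-generation runs in this chapter, here is a sketch using the publicly hosted gpt2 checkpoint through Hugging Face; the GPT-3 engines themselves are accessed through OpenAI’s API, which is not shown here.

```python
from transformers import pipeline

# The hosted "gpt2" checkpoint stands in for the larger 345M model run in the chapter.
generator = pipeline("text-generation", model="gpt2")
result = generator("The transformer architecture has", max_length=40,
                   num_return_sequences=1)
print(result[0]["generated_text"])
```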

Chapter 8, Applying Transformers to Legal and Financial Documents for AI Text Summarization, goes through the concepts and architecture of the T5 transformer model. We will initialize a T5 model from Hugging Face and task it with summarizing various documents, including a sample from the Bill of Rights, exploring the successes and limitations of transfer learning approaches applied to transformers. Finally, we will use GPT-3 to summarize some corporation law text for a second grader.
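
A minimal T5 summarization sketch with Hugging Face; the t5-small checkpoint and the generation settings are illustrative choices to keep the example light.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "t5-small" keeps the example light; the chapter's checkpoint may differ.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 uses a task prefix; "summarize: " tells the model which task to perform.
text = ("summarize: Excessive bail shall not be required, nor excessive fines "
        "imposed, nor cruel and unusual punishments inflicted.")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=60,
                             num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```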

Chapter 9, Matching Tokenizers and Datasets, analyzes the limits of tokenizers and looks at some of the methods applied to improve the quality of the data encoding process. We will first build a Python program to investigate why some words are omitted or misinterpreted by word2vec tokenizers. Following this, we find the limits of pretrained tokenizers with a tokenizer-agnostic method.

We will improve a T5 summary by applying some of the ideas that show that there is still much room left to improve the methodology of the tokenization process. Finally, we will test the limits of GPT-3’s language understanding.
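
The kind of probe the chapter builds can start as simply as the following sketch, which shows how a pretrained WordPiece tokenizer splits words into subword pieces; the word list is illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Rare or domain-specific words may be split into several subword pieces,
# which can blur their meaning for downstream tasks.
for word in ["transformer", "amoeba", "antidisestablishmentarianism"]:
    print(word, "->", tokenizer.tokenize(word))
```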

Chapter 10, Semantic Role Labeling with BERT-Based Transformers, explores how transformers learn to understand a text’s content. Semantic Role Labeling (SRL) is a challenging exercise for a human. Transformers can produce surprising results. We will implement a BERT-based transformer model designed by the Allen Institute for AI in a Google Colab notebook. We will also use their online resources to visualize SRL outputs. Finally, we will question the scope of SRL and understand the reasons behind its limitations.

Part III: Advanced Language Understanding Techniques

Chapter 11, Let Your Data Do the Talking: Story, Questions, and Answers, shows how a transformer can learn how to reason. A transformer must be able to understand a text, a story, and also display reasoning skills. We will see how question answering can be enhanced by adding NER and SRL to the process. We will build the blueprint for a question generator that can be used to train transformers or as a stand-alone solution.

Chapter 12, Detecting Customer Emotions to Make Predictions, shows how transformers have improved sentiment analysis. We will analyze complex sentences using the Stanford Sentiment Treebank, challenging several transformer models to understand not only the structure of a sequence but also its logical form. We will see how to use transformers to make predictions that trigger different actions depending on the sentiment analysis output. The chapter finishes with some edge cases using GPT-3.

Chapter 13, Analyzing Fake News with Transformers, delves into the hot topic of fake news and how transformers can help us understand the different perspectives of the online content we see each day. Every day, billions of messages, posts, and articles are published on the web through social media, websites, and every form of real-time communication available. Using several techniques from the previous chapters, we will analyze debates on climate change and gun control, as well as tweets from a former president. We will go through the moral and ethical problem of determining what can be considered fake news beyond reasonable doubt and what news remains subjective.

Chapter 14, Interpreting Black Box Transformer Models, lifts the lid on the black box that is transformer models by visualizing their activity. We will use BertViz to visualize attention heads and the Language Interpretability Tool (LIT) to carry out a principal component analysis (PCA). Finally, we will use LIME to visualize transformers via dictionary learning.

Chapter 15, From NLP to Task-Agnostic Transformer Models, delves into the advanced models, Reformer and DeBERTa, running examples using Hugging Face. Transformers can process images as sequences of words. We will also look at different vision transformers such as ViT, CLIP, and DALL-E. We will test them on computer vision tasks, including generating computer images.
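
As a preview of the vision transformer examples, here is a minimal ViT classification sketch with Hugging Face; the blank placeholder image and the google/vit-base-patch16-224 checkpoint are illustrative choices.

```python
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# A blank placeholder image keeps the sketch self-contained; use a real photo in practice.
image = Image.new("RGB", (224, 224), color="white")

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```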

Chapter 16, The Emergence of Transformer-Driven Copilots, explores the maturity of Industry 4.0. The chapter begins with prompt engineering examples using informal/casual English. Next, we will use GitHub Copilot to assist with creating code. We will see that vision transformers can help NLP transformers visualize the world around them. We will create a transformer-based recommendation system, which can be used by digital humans in whatever metaverse you may end up in!

Chapter 17, The Consolidation of Suprahuman Transformers with OpenAI’s ChatGPT and GPT-4, builds on the previous chapters, exploring OpenAI’s state-of-the-art transformer models. We will set up conversational AI with ChatGPT and learn how it can explain transformer outputs using explainable AI. We will explore GPT-4 and see how it creates a k-means clustering program from a simple prompt. Advanced Prompt Engineering will be introduced, building on the prompt engineering learned earlier in the book. Finally, we use DALL-E 2 to create and produce variations of an image.
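
For reference, here is a minimal k-means clustering program of the kind such a prompt might ask for, written with scikit-learn on synthetic data; it is not GPT-4’s actual output.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with three natural clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print("Cluster centers:\n", kmeans.cluster_centers_)
print("First ten labels:", labels[:10])
```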

Appendix I, Terminology of Transformer Models, examines the high-level structure of a transformer, from stacks and sublayers to attention heads.

Appendix II, Hardware Constraints for Transformer Models, looks at CPU and GPU performance running transformers. We will see why transformers and GPUs are a perfect match, concluding with a test using Google Colab CPU, Google Colab Free GPU, and Google Colab Pro GPU.
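
A simple way to feel the CPU/GPU gap discussed in this appendix is a matrix-multiplication micro-benchmark like the sketch below, written with PyTorch; the matrix size and repeat count are arbitrary.

```python
import time
import torch

def benchmark(device, size=2048, repeats=10):
    """Average time of a square matrix multiplication on the given device."""
    x = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(repeats):
        _ = x @ x
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / repeats

print("CPU:", benchmark("cpu"))
if torch.cuda.is_available():
    print("GPU:", benchmark("cuda"))
```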

Appendix III, Generic Text Completion with GPT-2, provides a detailed explanation of generic text completion using GPT-2 from Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines.

Appendix IV, Custom Text Completion with GPT-2, supplements Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines by building and training a GPT-2 model and making it interact with custom text.

Appendix V, Answers to the Questions, provides answers to the questions at the end of each chapter.