Mastering Transformers

By: Savaş Yıldırım, Meysam Asgari-Chenaghlu

Overview of this book

Transformer-based language models have dominated natural language processing (NLP) research and have now become a new paradigm. With this book, you'll learn how to build various transformer-based NLP applications using the Python Transformers library. The book introduces Transformers by showing you how to write your first hello-world program. You'll then learn how a tokenizer works and how to train your own tokenizer.

As you advance, you'll explore the architecture of autoencoding models, such as BERT, and autoregressive models, such as GPT. You'll see how to train and fine-tune models for a variety of natural language understanding (NLU) and natural language generation (NLG) problems, including text classification, token classification, and text representation. This book also helps you learn about efficient models for challenging problems, such as long-context NLP tasks with limited computational capacity.

You'll also work with multilingual and cross-lingual problems, optimize models by monitoring their performance, and discover how to deconstruct these models for interpretability and explainability. Finally, you'll be able to deploy your transformer models in a production environment. By the end of this NLP book, you'll have learned how to use Transformers to solve advanced NLP problems with advanced models.
Table of Contents (16 chapters)

Section 1: Introduction – Recent Developments in the Field, Installations, and Hello World Applications
Section 2: Transformer Models – From Autoencoding to Autoregressive Models
Section 3: Advanced Topics

Cross-lingual zero-shot learning

In the previous sections, you learned how to perform zero-shot text classification using monolingual models. Performing multilingual and cross-lingual zero-shot classification with XLM-R is identical in approach and code to what you did previously, so we will use mT5 here instead.
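As a quick reminder, that pipeline-based approach looks like the following minimal sketch. The checkpoint name (joeddav/xlm-roberta-large-xnli) and the example inputs are assumptions for illustration; any XNLI fine-tuned XLM-R checkpoint from the Hugging Face Hub works the same way:

from transformers import pipeline

# Zero-shot classification with an XNLI fine-tuned XLM-R checkpoint
# (assumed checkpoint name; swap in any XNLI-tuned XLM-R model).
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# A French premise with English candidate labels: the model can mix
# languages because XLM-R was pre-trained on 100 languages.
sequence = "L'équipe a remporté le championnat hier soir."
result = classifier(sequence, candidate_labels=["sports", "politics", "economy"])
print(result["labels"][0], result["scores"][0])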

mT5, a massively multilingual pre-trained language model, follows the same encoder-decoder Transformer architecture as T5. The key difference lies in the pre-training data: T5 is pre-trained on English only, whereas mT5 is trained on 101 languages from the multilingual Common Crawl (mC4) corpus.

The fine-tuned version of mT5 on the XNLI dataset is available from the HuggingFace repository (https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli).
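Loading this checkpoint follows the usual Auto-class pattern; a minimal sketch, with the model name taken from the URL above:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)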

The T5 model and its variant, mT5, are completely text-to-text models, which means they produce text for any task they are given, even if the task is classification or NLI. So, running inference with this model requires extra steps. We'll take the...
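To make those extra steps concrete, here is a minimal sketch of one way to run NLI inference with this checkpoint. The prompt format ("xnli: premise: ... hypothesis: ...") and the mapping of the label tokens "0", "1", and "2" to entailment, neutral, and contradiction are assumptions based on the checkpoint's model card, so verify them there:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A cross-lingual pair: German premise, English hypothesis.
premise = "Das Konzert wurde wegen Regens abgesagt."
hypothesis = "The concert did not take place."

# The task is serialized as plain text (assumed prompt format).
text = f"xnli: premise: {premise} hypothesis: {hypothesis}"
inputs = tokenizer(text, return_tensors="pt")

# Rather than generating freely, score only the three candidate label
# tokens at the first decoding step (assumed label mapping:
# "0" = entailment, "1" = neutral, "2" = contradiction).
label_ids = [
    tokenizer(label, add_special_tokens=False).input_ids[0]
    for label in ["0", "1", "2"]
]
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits

probs = logits[0, 0, label_ids].softmax(dim=-1)
for name, p in zip(["entailment", "neutral", "contradiction"], probs):
    print(f"{name}: {p:.3f}")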