Mastering Transformers - Second Edition

By: Savaş Yıldırım, Meysam Asgari-Chenaghlu

Overview of this book

Transformer-based language models such as BERT, T5, GPT, DALL-E, and ChatGPT have dominated NLP research and become a new paradigm. Thanks to their accurate and fast fine-tuning capabilities, transformer-based language models have outperformed traditional machine learning approaches on many challenging natural language understanding (NLU) problems. Beyond NLP, multimodal learning and generative AI have recently emerged as fast-growing areas with promising results. Mastering Transformers will help you understand and implement multimodal solutions, including text-to-image, and also covers computer vision solutions that are based on transformers. You'll get started by understanding various transformer models before learning how to train different autoregressive language models such as GPT and XLNet. The book will also get you up to speed with boosting model performance, as well as tracking model training using the TensorBoard toolkit. In the later chapters, you'll focus on using vision transformers to solve computer vision problems. Finally, you'll discover how to harness the power of transformers to model time series data and make forecasts. By the end of this book, you'll have an understanding of transformer models and how to use them to solve challenges in NLP and CV.
Table of Contents (25 chapters)

Part 1: Recent Developments in the Field, Installations, and Hello World Applications
Part 2: Transformer Models: From Autoencoders to Autoregressive Models
Part 3: Advanced Topics
Part 4: Transformers beyond NLP

Working with tokenization algorithms

In the opening part of the chapter, we trained the BERT model using a specific tokenizer, namely BertWordPieceTokenizer. It is now worth discussing the tokenization process in detail. Tokenization is a way of splitting textual input into tokens and assigning an identifier to each token before feeding them to the neural network architecture. The most intuitive approach is to split the sequence into smaller chunks on whitespace. However, such approaches do not meet the requirements of some languages, such as Japanese, and may also lead to a huge vocabulary. Almost all transformer models leverage subword tokenization, not only to reduce dimensionality but also to encode rare (or unknown) words that were not seen during training. Subword tokenization relies on the idea that every word, including rare or unknown ones, can be decomposed into smaller, meaningful chunks that are frequently seen symbols in the training corpus.
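
To make this concrete, here is a minimal sketch that shows how a WordPiece tokenizer splits a word into frequent subword pieces and maps each piece to an identifier. It assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, rather than the tokenizer we trained earlier in this chapter:

```python
from transformers import AutoTokenizer

# Load the WordPiece tokenizer shipped with a pretrained BERT checkpoint
# (assumes the Hugging Face `transformers` library is installed).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# A word that is not a single vocabulary entry is broken into frequent
# subword pieces; the "##" prefix marks a piece that continues a word.
tokens = tokenizer.tokenize("Tokenization of rare words")
print(tokens)  # roughly: ['token', '##ization', 'of', 'rare', 'words']

# Each subword piece is then mapped to its integer identifier in the vocabulary.
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
```

The same idea applies to the BertWordPieceTokenizer we trained earlier: once trained on a corpus, it splits unseen words into pieces that appeared frequently in that corpus.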

Some traditional tokenizers...
