Mastering Transformers - Second Edition

By: Savaş Yıldırım, Meysam Asgari-Chenaghlu

Overview of this book

Transformer-based language models such as BERT, T5, GPT, DALL-E, and ChatGPT have dominated NLP research and become a new paradigm. Thanks to their accurate and fast fine-tuning capabilities, transformer-based language models have been able to outperform traditional machine learning-based approaches on many challenging natural language understanding (NLU) problems. Beyond NLP, a fast-growing area of multimodal learning and generative AI has recently emerged, showing promising results. Mastering Transformers will help you understand and implement multimodal solutions, including text-to-image generation. Computer vision solutions based on transformers are also explained in the book. You’ll get started by understanding various transformer models before learning how to train different autoregressive language models such as GPT and XLNet. The book will also get you up to speed with boosting model performance, as well as tracking model training using the TensorBoard toolkit. In the later chapters, you’ll focus on using vision transformers to solve computer vision problems. Finally, you’ll discover how to harness the power of transformers to model time series data and make forecasts. By the end of this transformers book, you’ll have an understanding of transformer models and how to use them to solve challenges in NLP and CV.
Table of Contents (25 chapters)
  • Part 1: Recent Developments in the Field, Installations, and Hello World Applications
  • Part 2: Transformer Models: From Autoencoders to Autoregressive Models
  • Part 3: Advanced Topics
  • Part 4: Transformers beyond NLP

Multimodal learning

Multimodal learning is a general topic in AI that refers to solutions in which the data involves more than one modality, rather than a single one (only images, only text, and so on). As an example, consider a problem where both an image and text appear as input or output. Another example is a cross-modality problem, where the input and output modalities are not the same.

Before jumping into multimodal learning with transformers, it is useful to describe how they can be applied to images as well. Transformers take their input in the form of a sequence but, unlike text, an image is not a 1D sequence. One approach in this field splits the image into patches; each patch is flattened, linearly projected into a vector, and combined with a positional encoding.

Figure 1.15 shows the architecture of the Vision Transformer (ViT) and how it works:

Figure 1.15 – Vision Transformer (https://ai.googleblog.com/2020/12/transformers-for-image-recognition-at.html)

As in architectures such as BERT, a classification head can be attached for tasks such as image classification. However, other use cases and applications can be built on this approach as well.
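The patchification step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual ViT implementation: the sizes follow the commonly published ViT-Base defaults (224x224 input, 16x16 patches, 768-dimensional embeddings), and the projection matrix and positional encodings are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((224, 224, 3))     # H x W x C input image
patch = 16
n_patches = (224 // patch) ** 2       # 196 patches in total

# Split the image into non-overlapping 16x16 patches and flatten each one.
patches = (
    image.reshape(224 // patch, patch, 224 // patch, patch, 3)
         .transpose(0, 2, 1, 3, 4)
         .reshape(n_patches, patch * patch * 3)   # 196 x 768
)

# Linearly project each flattened patch and add a positional encoding,
# yielding the token sequence the transformer encoder consumes.
W = rng.random((patch * patch * 3, 768))     # stand-in for the learned projection
pos = rng.random((n_patches, 768))           # stand-in for positional encodings
tokens = patches @ W + pos

print(tokens.shape)   # (196, 768)
```

In the real model, a learnable classification token is prepended to this sequence before it enters the encoder, and the classification head reads its final representation.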

Using a transformer for images or text separately can produce a strong model that understands one modality. But if we want a single model that understands both at once and links text to images, the two must be trained jointly under shared constraints. Contrastive Language–Image Pre-training (CLIP) is one model that understands both images and text. It can be used for semantic search, where the input can be a text or an image and the output is a text or an image.

The next figure shows how the CLIP model is trained by using a dual encoder:

Figure 1.16 – CLIP model contrastive pre-training (https://openai.com/blog/clip/)

As the CLIP architecture makes clear, it is well suited to zero-shot prediction across the text and image modalities. DALL-E and diffusion-based models such as Stable Diffusion also fall into this category.
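The dual-encoder scoring in the figure above can be sketched as follows. This is a toy illustration with random stand-in embeddings, a batch of 4 pairs, and a fixed temperature of 0.07; in the real model, the embeddings come from trained image and text encoders and the temperature is learned.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 512   # batch of 4 matching image-text pairs, 512-dim embeddings

def normalize(x):
    """L2-normalize each embedding so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

image_emb = normalize(rng.standard_normal((N, d)))   # stand-in image encoder output
text_emb = normalize(rng.standard_normal((N, d)))    # stand-in text encoder output

# Pairwise cosine similarities, scaled by a temperature.
logits = (image_emb @ text_emb.T) / 0.07

# Softmax over texts: for each image, a distribution over the batch's captions.
# Contrastive training pushes the diagonal (matching pairs) toward probability 1.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Zero-shot classification reuses the same scoring: encode each class name as
# text (for example, "a photo of a dog") and pick the highest-similarity one.
best_caption = probs.argmax(axis=1)
print(probs.shape)
```

The same similarity matrix, read column-wise, scores images against a text query, which is what enables text-to-image semantic search.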

The Stable Diffusion pipeline is shown in the next figure:

Figure 1.17 – Stable Diffusion pipeline

The preceding diagram can also be viewed at https://www.tensorflow.org/tutorials/generative/generate_images_with_stable_diffusion, and the license is as follows: https://creativecommons.org/licenses/by/4.0/.

For example, Stable Diffusion uses a text encoder to convert the prompt into dense vectors; conditioned on these, a diffusion model iteratively constructs a vector representation of the corresponding image. A decoder then decodes this representation, finally producing an image semantically similar to the text input.
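The three stages just described can be sketched at the level of tensor shapes. This is only a structural outline with random stand-in functions, not the real models; the dimensions (77 text tokens, 768-dim embeddings, 64x64x4 latents, 512x512 output) follow the commonly published Stable Diffusion v1 configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(prompt):
    # CLIP-style text encoder: prompt -> sequence of dense vectors.
    return rng.standard_normal((77, 768))

def diffusion_model(text_emb, steps=50):
    # Starts from random noise and iteratively denoises it in latent space,
    # conditioned on the text embedding (the update here is a stand-in).
    latent = rng.standard_normal((64, 64, 4))
    for _ in range(steps):
        latent = latent - 0.01 * latent
    return latent

def decoder(latent):
    # VAE-style decoder: latent -> RGB image (8x spatial upsampling).
    return rng.random((512, 512, 3))

image = decoder(diffusion_model(text_encoder("a cat wearing a hat")))
print(image.shape)   # (512, 512, 3)
```

Running the denoising loop in a compact latent space rather than pixel space is what makes the pipeline tractable; the decoder only runs once, at the end.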

Multimodal learning is not limited to image-text tasks; text can also be combined with many other modalities, such as speech, numerical data, and graphs.
