Transformers for Natural Language Processing and Computer Vision - Third Edition

By: Denis Rothman
4.2 (35)
Overview of this book

Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, practical applications, and popular platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV). The book guides you through a range of transformer architectures, from foundation models to generative AI. You’ll pretrain and fine-tune LLMs and work through different use cases, from summarization to question-answering systems leveraging embedding-based search. You’ll also implement Retrieval Augmented Generation (RAG) to enhance accuracy and gain greater control over your LLM outputs.

Additionally, you’ll understand common LLM risks, such as hallucinations, memorization, and privacy issues, and implement mitigation strategies using moderation models alongside rule-based systems and knowledge integration. You’ll dive into generative vision transformers and multimodal architectures, and build practical applications such as image and video classification.

Finally, you’ll combine different models and platforms to build AI solutions and explore AI agent capabilities. The book leaves you with an understanding of transformer architectures, including strategies for pretraining, fine-tuning, and LLM best practices.
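To make the embedding-based search behind RAG concrete, here is a minimal sketch, not taken from the book, assuming the sentence-transformers library and a tiny illustrative in-memory corpus; the retrieved passage is simply prepended to the prompt that would be sent to an LLM:

```python
# A minimal embedding-based retrieval step for RAG.
# Assumptions (not from the book): the sentence-transformers package is
# installed, and the corpus is a tiny illustrative list of strings.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "T5 casts every NLP task as text-to-text generation.",
    "Vision transformers split an image into patches before encoding.",
    "Retrieval Augmented Generation grounds an LLM in retrieved documents.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

query = "How does RAG improve the accuracy of LLM outputs?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

# On normalized vectors, cosine similarity is a plain dot product.
scores = doc_vecs @ query_vec
context = corpus[int(np.argmax(scores))]

# The retrieved passage is prepended to the prompt sent to the LLM,
# giving the model grounded context to answer from.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

Normalizing the embeddings up front is the usual design choice when the corpus fits in memory; production systems replace the array with a vector index but keep the same retrieve-then-prompt flow.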

Questions

  1. T5 models only have encoder stacks like BERT models. (True/False)
  2. T5 models have both encoder and decoder stacks. (True/False)
  3. T5 models use relative positional encoding, not absolute positional encoding. (True/False)
  4. Text-to-text models are only designed for summarization. (True/False)
  5. Text-to-text models apply a prefix to the input sequence that determines the NLP task. (True/False) (See the sketch after these questions.)
  6. T5 models require specific hyperparameters for each task. (True/False)
  7. One of the advantages of text-to-text models is that they use the same hyperparameters for all NLP tasks. (True/False)
  8. T5 transformers do not contain a feedforward network. (True/False)
  9. Hugging Face is a framework that makes transformers easier to implement. (True/False)
  10. OpenAI’s transformer models are the best for summarization tasks. (True/False)
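
Questions 5 and 9 point at how this works in practice. As a minimal sketch, assuming the Hugging Face transformers and sentencepiece packages and the public t5-small checkpoint (standard T5 conventions, not code from this chapter), the task prefix selects the task while the model and its hyperparameters stay the same:

```python
# Text-to-text inference with T5 via Hugging Face transformers.
# Assumptions (not from this chapter): transformers and sentencepiece
# are installed; t5-small is a small public checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = (
    "Transformers process sequences with self-attention instead of "
    "recurrence, which allows parallel training over entire sequences."
)

# The "summarize:" prefix selects the task (question 5); swapping it for
# "translate English to German:" changes the task without changing the
# model or its hyperparameters (question 7).
inputs = tokenizer("summarize: " + text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Question 3: in the Hugging Face implementation, T5 learns relative
# position biases rather than adding absolute positional embeddings;
# the bias table sits in the first self-attention layer.
print(model.encoder.block[0].layer[0].SelfAttention.relative_attention_bias)
```

The same two classes cover fine-tuning as well; only the prefixed input changes from task to task, which is the point of the text-to-text design.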