Pretrain Vision and Large Language Models in Python

By: Emily Webber
Overview of this book

Foundation models have forever changed machine learning. From BERT to ChatGPT, CLIP to Stable Diffusion, when billions of parameters are combined with large datasets and hundreds to thousands of GPUs, the result is nothing short of record-breaking. The recommendations, advice, and code samples in this book will help you pretrain and fine-tune your own foundation models from scratch on AWS and Amazon SageMaker, while applying them to hundreds of use cases across your organization. With advice from seasoned AWS and machine learning expert Emily Webber, this book helps you learn everything you need to go from project ideation to dataset preparation, training, evaluation, and deployment for large language, vision, and multimodal models. With step-by-step explanations of essential concepts and practical examples, you’ll go from mastering the concept of pretraining to preparing your dataset and model, configuring your environment, training, fine-tuning, evaluating, deploying, and optimizing your foundation models. You will learn how to apply the scaling laws to distributing your model and dataset over multiple GPUs, remove bias, achieve high throughput, and build deployment pipelines. By the end of this book, you’ll be well equipped to embark on your own project to pretrain and fine-tune the foundation models of the future.
Table of Contents (23 chapters)

Part 1: Before Pretraining
Part 2: Configure Your Environment
Part 3: Train Your Model
Part 4: Evaluate Your Model
Part 5: Deploy Your Model

Creating embeddings – tokenizers and other key steps for smart features

Now that your data loader is built, tested, and possibly scaled, you may be asking yourself: what do I do with all of these raw images and/or natural language strings? Do I throw them straight into my neural network? The last five years of representation learning have answered this definitively: no, you should not feed raw images or text into your neural network right off the bat. You should first convert your raw inputs to embeddings by using another model.
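To make this concrete, here is a minimal sketch of the idea for text: tokenize raw strings into integer IDs, run them through a pretrained encoder, and pool the token vectors into one embedding per sentence. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint purely for illustration; the specific model and tooling you use in your own project may differ.

```python
# Minimal sketch: raw text -> token IDs -> contextual embeddings.
# Assumes the Hugging Face transformers library and the bert-base-uncased
# checkpoint; these are illustrative choices, not a prescribed setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "Foundation models have changed machine learning.",
    "Embeddings are vector representations of raw inputs.",
]

# Tokenize: raw strings -> integer token IDs plus an attention mask
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Encode: token IDs -> one contextual vector per token
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token vectors (ignoring padding) to get one embedding per sentence
mask = inputs["attention_mask"].unsqueeze(-1)        # (batch, seq_len, 1)
token_embeddings = outputs.last_hidden_state          # (batch, seq_len, hidden)
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)

print(sentence_embeddings.shape)  # torch.Size([2, 768]) for BERT-base
```

The same pattern applies to images, where a pretrained vision encoder plays the role of the tokenizer plus language model and produces one vector per image.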

The intuition for this is simple: before you teach your model how to recognize relationships in your dataset, you first have to introduce it to the concept of a dataset. Creating embeddings is essentially a way of doing this; you use a data structure that has been trained by another process to create vector representations of your data. That is to say, you provide your raw text and images as input, and you get high-dimensional vectors...