Pretrain Vision and Large Language Models in Python

By: Emily Webber
Overview of this book

Foundation models have forever changed machine learning. From BERT to ChatGPT, CLIP to Stable Diffusion, when billions of parameters are combined with large datasets and hundreds to thousands of GPUs, the result is nothing short of record-breaking. The recommendations, advice, and code samples in this book will help you pretrain and fine-tune your own foundation models from scratch on AWS and Amazon SageMaker, while applying them to hundreds of use cases across your organization. With advice from seasoned AWS and machine learning expert Emily Webber, this book helps you learn everything you need to go from project ideation to dataset preparation, training, evaluation, and deployment for large language, vision, and multimodal models. With step-by-step explanations of essential concepts and practical examples, you’ll go from mastering the concept of pretraining to preparing your dataset and model, configuring your environment, training, fine-tuning, evaluating, deploying, and optimizing your foundation models. You will learn how to apply scaling laws, distribute your model and dataset over multiple GPUs, remove bias, achieve high throughput, and build deployment pipelines. By the end of this book, you’ll be well equipped to embark on your own project to pretrain and fine-tune the foundation models of the future.
Table of Contents (23 chapters)

Part 1: Before Pretraining
Part 2: Configure Your Environment
Part 3: Train Your Model
Part 4: Evaluate Your Model
Part 5: Deploy Your Model

Reinforcement learning from human feedback

At least two things are undeniable about ChatGPT. First, its launch generated an incredible amount of buzz. If you follow ML topics on social and general media, you probably remember being flooded with content about people using it for everything from writing new recipes to start-up growth plans, and from website code to Python data analysis tips. Second, there’s a good reason for the buzz: its performance is simply far better than that of any prompt-based NLP solution the world had seen before. It establishes a new state of the art in question answering, text generation, classification, and many other domains. In some cases, it’s even better than a basic Google search! How did they do this? RLHF is the answer!
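Before digging into the details, it helps to see the core mechanic of RLHF in miniature: sample a response from a policy, score it with a reward model trained on human preferences, and nudge the policy toward higher-reward responses while a KL penalty keeps it close to the original pretrained model. The sketch below is a deliberately tiny, self-contained illustration in plain PyTorch; the toy policy over four canned responses, the stand-in reward_model function, and the KL_COEF value are all hypothetical placeholders I chose for illustration, not ChatGPT's actual implementation or any library's API.

```python
import torch
import torch.nn.functional as F

# A minimal RLHF-style sketch, assuming a toy discrete "policy" that picks
# one of four canned responses. Real RLHF operates on a full language model;
# every name here is an illustrative stand-in.

torch.manual_seed(0)
NUM_RESPONSES = 4

# Trainable policy logits, plus a frozen snapshot standing in for the
# pretrained reference model.
policy_logits = torch.zeros(NUM_RESPONSES, requires_grad=True)
reference_logits = torch.zeros(NUM_RESPONSES)  # frozen, not updated

def reward_model(response_id: int) -> float:
    # Hypothetical stand-in for a reward model trained on human preference
    # rankings: here, humans simply prefer response 2.
    return 1.0 if response_id == 2 else 0.0

optimizer = torch.optim.Adam([policy_logits], lr=0.1)
KL_COEF = 0.2  # penalty keeping the policy near the reference model

for step in range(200):
    probs = F.softmax(policy_logits, dim=-1)
    dist = torch.distributions.Categorical(probs)
    response = dist.sample()                # "generate" a response
    reward = reward_model(response.item())  # score it with the reward model

    # KL(policy || reference): discourage drifting far from the pretrained model.
    ref_probs = F.softmax(reference_logits, dim=-1)
    kl = (probs * (probs / ref_probs).log()).sum()

    # REINFORCE-style objective: maximize reward, minus the KL penalty.
    loss = -dist.log_prob(response) * reward + KL_COEF * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Probability mass shifts toward the human-preferred response 2.
print(F.softmax(policy_logits, dim=-1))
```

In practice, the policy-gradient step is usually PPO rather than plain REINFORCE, and open source libraries such as Hugging Face's trl package implement that full loop on top of real language models, but the reward-plus-KL-penalty structure shown here is the essential idea.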

While RLHF is not a new concept in and of itself, ChatGPT is certainly its most visibly successful application in the large language model domain. The predecessor to ChatGPT was...