Modern Generative AI with ChatGPT and OpenAI Models

By: Valentina Alto

Overview of this book

Generative AI models and AI language models are becoming increasingly popular due to their unparalleled capabilities. This book will provide you with insights into the inner workings of LLMs and guide you through creating your own language models. You’ll start with an introduction to the field of generative AI, helping you understand how these models are trained to generate new data. Next, you’ll explore use cases where ChatGPT can boost productivity and enhance creativity. You’ll learn how to get the best from your ChatGPT interactions by improving your prompt design and leveraging zero-, one-, and few-shot learning capabilities. The use cases are grouped into clusters for marketers, researchers, and developers, which will help you apply what you learn in this book to your own challenges faster. You’ll also discover enterprise-level scenarios that leverage OpenAI models’ APIs available on Azure infrastructure, covering both generative models such as GPT-3 and embedding models such as Ada. For each scenario, you’ll find an end-to-end implementation in Python, using Streamlit as the frontend and the LangChain SDK to integrate the models into your applications. By the end of this book, you’ll be well equipped to navigate the generative AI field and start using ChatGPT and OpenAI models’ APIs in your own projects.
Table of Contents (17 chapters)

Part 1: Fundamentals of Generative AI and GPT Models
Part 2: ChatGPT in Action
Part 3: OpenAI for Enterprises

Zero-, one-, and few-shot learning – typical of transformer models

In the previous chapters, we mentioned that OpenAI models, and hence also ChatGPT, come in a pre-trained format: they have been trained on a huge amount of data, and their billions of parameters have been set accordingly.
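
Using a pre-trained model requires no further training at all: you simply send it a prompt and read its completion. The snippet below is a minimal sketch of such a call, assuming the legacy (pre-1.0) openai Python package and an API key stored in the OPENAI_API_KEY environment variable; the prompt itself is only an illustrative example.

    import os
    import openai

    # The pre-trained model is used as-is: no additional training is performed.
    openai.api_key = os.getenv("OPENAI_API_KEY")

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # a pre-trained chat model
        messages=[
            {"role": "user",
             "content": "Classify the sentiment of: 'I loved this book!'"}
        ],
    )

    print(response["choices"][0]["message"]["content"])

Calling the model with a bare instruction like this, without any examples included in the prompt, is the zero-shot setting that this section is about.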

However, this doesn’t mean that those models can’t learn anymore. In Chapter 2, we saw that one way to customize an OpenAI model and make it better at addressing specific tasks is fine-tuning.

Definition

Fine-tuning is the process of adapting a pre-trained model to a new task. In fine-tuning, the parameters of the pre-trained model are altered, either by adjusting the existing parameters or by adding new parameters so that they fit the data for the new task. This is done by training the model on a smaller labeled dataset that is specific to the new task. The key idea behind fine-tuning is to leverage the knowledge learned from the pre-trained model and...
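
As a rough illustration of this workflow, the following sketch uses the fine-tuning endpoints of the legacy (pre-1.0) openai Python package; the file name training_data.jsonl and the choice of ada as the base model are hypothetical, and newer versions of the library expose different endpoints.

    import openai

    # Upload a small, task-specific labeled dataset in JSONL format,
    # with one {"prompt": ..., "completion": ...} pair per line.
    training_file = openai.File.create(
        file=open("training_data.jsonl", "rb"),  # hypothetical file name
        purpose="fine-tune",
    )

    # Start a fine-tuning job: the pre-trained model's parameters are
    # further adjusted so that they fit the data for the new task.
    fine_tune_job = openai.FineTune.create(
        training_file=training_file["id"],
        model="ada",  # hypothetical choice of base model
    )

    print(fine_tune_job["id"], fine_tune_job["status"])

Once the job completes, the resulting fine-tuned model can be called just like the pre-trained one, but its parameters now reflect the data for the new task.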