In this recipe, we will learn how to modify the Chat Log and see how it impacts the completion response we receive from the model. This is important because developers often find it the best way to fine-tune a model's behavior without actually needing to create a new model. It also follows a prompt engineering must-have: providing the model with suitable examples.
We can add examples of prompts and responses to the Chat Log to modify the model’s behavior. Let’s observe this with the following steps:
1. In the System Message box, type the following: You are an assistant that creates marketing slogans based on descriptions of companies. Here, we are clearly instructing the model of its role and context.
2. In the USER message, type the following: A company that makes ice cream.
3. Select the Add message button located underneath the USER label to add a new message. Ensure that the label of the message says ASSISTANT. If it does not, select the label to toggle between USER and ASSISTANT.
4. Now, type the following into the ASSISTANT message: Sham - the ice cream that never melts!
5. Repeat steps 2 to 4 to add two more USER and ASSISTANT message pairs: A company that produces comedy movies. paired with Sham - the best way to tickle your funny bone!, and A company that provides legal assistance to businesses. paired with Sham - we know business law!. At this point, you should see the following:
Figure 1.5 – The OpenAI Playground with Chat Logs populated
6. Next, add a new USER message with the following: A company that writes engaging mystery novels. Then, submit the prompt. The model should return a completion similar to the following: Sham – unravel the secrets with our captivating mysteries!
Your completion may be slightly different, but the response will almost certainly start with “Sham –” and end with an exclamation point. In this way, we have trained the model to only give us completion responses in that format.
Figure 1.6 – The OpenAI Playground with completion, after changing the Chat Log
As we learned in the Running a completion request in the OpenAI Playground recipe, ChatGPT and its GPT models are built on a transformer architecture, which processes input and generates responses based on the immediate chat history it has been given. It doesn’t have an ongoing memory of past interactions or a stored understanding of context outside the immediate conversation. The Chat Log has a significant impact on the model’s completions. When the model receives a prompt, it takes into account the most recent prompt, the System Message, and all the preceding messages in the Chat Log.
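Outside the Playground, the same Chat Log maps directly to the messages parameter of a chat completion request. The following is a minimal sketch using the official openai Python package; the model name (gpt-3.5-turbo) and the reliance on an OPENAI_API_KEY environment variable are illustrative assumptions, not part of the recipe itself:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The Chat Log from the Playground, expressed as a list of messages.
# Each USER/ASSISTANT pair is an example that conditions the model's behavior.
messages = [
    {"role": "system", "content": "You are an assistant that creates marketing slogans based on descriptions of companies."},
    {"role": "user", "content": "A company that makes ice cream."},
    {"role": "assistant", "content": "Sham - the ice cream that never melts!"},
    {"role": "user", "content": "A company that produces comedy movies."},
    {"role": "assistant", "content": "Sham - the best way to tickle your funny bone!"},
    {"role": "user", "content": "A company that provides legal assistance to businesses."},
    {"role": "assistant", "content": "Sham - we know business law!"},
    # The new prompt whose completion we want
    {"role": "user", "content": "A company that writes engaging mystery novels."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model can be substituted
    messages=messages,
)
print(response.choices[0].message.content)
# Expected to follow the learned pattern, for example: Sham - ...!

Because the model is stateless, the entire list is sent with every request; this is exactly what the Playground does behind the scenes when you press Submit.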
We observed this in the Playground by providing our own sets of User and Assistant messages and then seeing how the model changed its completion, as we did in the preceding steps.
In particular, the model has detected two patterns in the Chat Log and has generated the completion to follow that behavior: every example slogan begins with the word Sham followed by a dash, and every example slogan ends with an exclamation point.
Overall, the Chat Log can be used to train the model to generate certain types of completions that the user wants to create. In addition, the Chat Log helps the model understand and maintain the context of the bigger conversation.
For example, if you added a User message with What is an airplane? and followed it up with another User message of How do they fly?, the model would understand that they refers to the airplane because of the Chat Log.
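The same idea can be sketched through the API (again assuming the openai Python package, gpt-3.5-turbo, and an OPENAI_API_KEY environment variable): the second request can only resolve they because the earlier exchange is sent along with it.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

history = [
    {"role": "user", "content": "What is an airplane?"},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
# Append the model's answer so it becomes part of the Chat Log
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Because the earlier exchange is included, the model can resolve "they" to airplanes
history.append({"role": "user", "content": "How do they fly?"})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message.content)

If the first exchange were omitted from history, the model would have no way of knowing what they refers to.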
The Chat Log plays a pivotal role in influencing the model’s completions, and this observation is a glimpse into the broader realm of prompt engineering. Prompt engineering is a technique where the input or prompt given to a model is carefully crafted to guide the model towards producing a desired output.
Within the sphere of prompt engineering, there are a few notable concepts, such as zero-shot prompting (no examples are provided), one-shot prompting (a single example is provided), and few-shot prompting (several examples, like the slogan pairs in our Chat Log, are provided to guide the model).
Understanding these nuances in how prompts can be engineered allows users to leverage ChatGPT’s capabilities more effectively, tailoring interactions to their specific needs.
Overall, the Chat Log (and the System Message, as we learned in the earlier recipe) is a great low-touch method of aligning the completion responses from OpenAI to a desired target, without needing to fine-tune the model itself. Now that we’ve used the Playground to test prompts and completions, it’s time to use the actual OpenAI API.