Prompt Engineering
Learn how to get the most out of your language model
Language models are great at generating text, but as developers we need that text to meet specific requirements. In many use cases, we want to feed the model some input text and get back output that satisfies the product need. To do so, you need to engineer the right prompt.
As an example, let's say you own an online retail platform. Your job requires you to write mass amounts of product descriptions, and you'd like to automate this process as much as possible. You can optimize the process by using AI21 Studio language models. Think of the models as smart autocompletion machines: you feed in text, and the model generates text that seems like a natural continuation. Your job is to ensure that the desired output (product descriptions, in this example) is the most natural continuation. In other words, you need to craft the right input to the model, also known as a prompt. So, what are your tools to make that happen?
0️⃣ Prompt with instructions (aka Zero-shot prompts)
This is the simplest approach and the one that feels the most natural.
Why? These models were specifically trained to follow natural language instructions, so this is the most natural way to interact with them: simply tell the model what you want it to do, and it will follow.
When? Drafting, seeking inspiration, or when the format and guidelines are still a work in progress.
How? Just provide an instruction to the model, like this:
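For instance, a zero-shot prompt can be as simple as a single sentence. The product name and wording below are made-up placeholders, not taken from AI21's documentation:

```python
# A minimal zero-shot prompt: one natural-language instruction, nothing else.
# "Humor Me" is an invented example product name.
prompt = "Write a product description for a funny t-shirt brand called Humor Me."

print(prompt)
```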
Feeding this text into the model yields a nice completion.
The model can generate nice descriptions based only on instructions, but this is not a realistic business scenario. A more realistic scenario is when you have a product name and some features, and you want a description that highlights those features.
You can modify your prompt by including the specific features and highlights you wish to incorporate:
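One way to do this is to append the features as a bulleted list inside the prompt. The product and features below are illustrative placeholders:

```python
# Sketch of a prompt that names concrete features for the model to highlight.
# Product name and features are invented for illustration.
product = "Humor Me t-shirt"
features = ["100% combed cotton", "unisex fit", "machine washable"]

prompt = (
    f"Write a product description for the {product}. "
    "Highlight the following features:\n"
    + "\n".join(f"- {feature}" for feature in features)
)

print(prompt)
```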
As you can see, the generated description includes all the features we wanted to highlight.
If you want, you can add more details and guidance to the model in your prompt. For example:
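A sketch of such a prompt, with the product details first, a `##` separator, and the instruction placed at the end (all product details are hypothetical):

```python
# Hypothetical prompt: product details first, a "##" separator, and the
# detailed instruction at the end of the prompt.
prompt = (
    "Product: Humor Me t-shirt\n"
    "Features:\n"
    "- 100% combed cotton\n"
    "- unisex fit\n"
    "##\n"
    "Write a playful, two-sentence product description that mentions "
    "every feature listed above."
)

print(prompt)
```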
This description is more closely aligned with the provided guidelines.
🎵 Notes
- Recency bias: In the third example, we placed the instruction at the end of the prompt rather than at the beginning. This is due to LLMs' recency bias: text near the end of a prompt influences the completion more than text at the beginning. In longer prompts, it is a good idea to place the instruction at the end.
- Separators: Notice the ## in the third example, which separates the instruction from the features to include? So did the model 😉. Adding a separator between prompt parts can help the model understand the task.
✍ Prompt with examples (aka Few-shot prompts)
Look again at the output we got earlier. Maybe the description isn't exactly what you imagined, and you want a different one. This can be hard to specify in the prompt; or perhaps it's easy to specify, but the model still doesn't meet your requirements. A powerful solution is the few-shot approach.
Why? Examples help the model understand the task and generate responses that adhere to the intended format.
When? Examples are particularly useful when there are stringent format constraints, a well-defined objective, and an overall structure to be maintained.
How? To establish a pattern for the model to follow, present a few instances (“shots”) of input-output pairs in the prompt. This enables the model to mimic the pattern. Then, provide the input for a query example and allow the model to generate a suitable completion. This approach is commonly referred to as a "few-shot prompt".
In your e-commerce platform, you probably have existing descriptions you have perfected over time, and you want to maintain the same spirit. You can take some of them and format them into a template of input-output pairs. This creates a prefix of a few examples, followed by a query (together, these constitute the prompt). This is how the prompt will look:
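A sketch of assembling such a prompt, with invented example pairs and `##` separating the shots; the query repeats the same template but leaves the description open for the model to complete:

```python
# Assemble a few-shot prompt from existing product descriptions.
# The example pairs below are invented placeholders.
examples = [
    ("Humor Me t-shirt | 100% cotton, unisex",
     "A soft, everyone-fits tee that takes jokes seriously."),
    ("Night Owl mug | ceramic, 350 ml",
     "A sturdy ceramic mug sized for serious late-night coffee."),
]
query = "Trail Blazer socks | wool blend, cushioned sole"

def make_prompt(examples, query):
    parts = []
    for product, description in examples:
        # Each shot follows the same template, terminated by a "##" separator.
        parts.append(f"Product: {product}\nDescription: {description}\n##")
    # The query uses the same template but stops after "Description:",
    # so the model's natural continuation is the missing description.
    parts.append(f"Product: {query}\nDescription:")
    return "\n".join(parts)

print(make_prompt(examples, query))
```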
And here’s the generated result.
It seems the model has captured the tone of the examples we provided. You can try it yourself in the playground.
Another advantage of the few-shot approach is that we can tell the model when to stop. Did you notice the ## at the end of every description? Besides acting as a separator that helps the model distinguish between the examples, it also marks where the model should stop generating more text. You can read more about stop sequences here.
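If `##` is passed to the API as a stop sequence, generation ends there automatically; this sketch shows the equivalent client-side truncation. The completion string is a made-up stand-in, not real model output:

```python
# Truncate a completion at the "##" marker, mimicking a stop sequence.
# The completion text below is invented for demonstration.
STOP = "##"
completion = "A soft cotton tee with an unbeatable fit.\n## Product: ..."

description = completion.split(STOP, 1)[0].strip()
print(description)
```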
🪄 Tips & Tricks
- Check several phrasings: Small differences between prompts can have a large impact on the completion, even if the meaning is the same.
- Change the temperature: This is an effective parameter that controls the creativity-accuracy tradeoff of the generation. You can read more about it here.
- Make sure your examples are varied: With few-shot prompts, the model will be biased to follow the pattern in the examples. If you only provide examples of clothes descriptions and then ask the model to generate a car description, it won't be as effective.
- Use the same structure/template: When using few-shot prompts, always use the same template for every example. Otherwise, the model will not recognize the pattern and you will get suboptimal results.
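The last tip can be sketched as a single template function applied to every shot, so the structure never drifts between examples. All field names and products here are illustrative:

```python
# One shared template keeps every example identically structured.
# Products, features, and descriptions are invented placeholders.
TEMPLATE = "Product: {name}\nFeatures: {features}\nDescription: {description}"

def render(name, features, description=""):
    return TEMPLATE.format(name=name, features=features, description=description)

shots = [
    render("Humor Me t-shirt", "100% cotton, unisex",
           "A soft, everyone-fits tee that takes jokes seriously."),
    render("Night Owl mug", "ceramic, 350 ml",
           "A sturdy ceramic mug for serious late-night coffee."),
]
# The query goes through the exact same template, description left blank.
prompt = "\n##\n".join(shots + [render("Trail Blazer socks", "wool blend")])
print(prompt)
```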