Prompt Engineering
Language models are great at generating text, but as developers we need that text to satisfy a specific product requirement. In many use cases, we'd like to feed the model some input text and get back output text that meets the product need. To do so, you need to engineer the right prompt.
To make this concrete, imagine you're building a writing platform that helps your users write more easily in their own voice. One of the main features is paraphrasing: given a sentence, your app should automatically suggest paraphrases that are more fluent and customized to the writer's voice (by the way, if you do happen to build this kind of app and need such a feature - head on over to our Rewrite API).
You’d like to use AI21 Studio language models to generate the paraphrased suggestions automatically. You can think of the models as smart autocompletion machinery: you feed in text, and the model generates text that reads like a natural continuation. Your job is to make sure that the desired output (paraphrases, in this example) is the most natural continuation. In other words, you need to craft the perfect input to the model, also known as a prompt. So, what are your tools for making that happen?
0️⃣ Zero-shot prompts
This is the simplest approach. Simply format your input prompt into a template that screams out what the completion should be:
[INPUT_TEXT]
Rewrite the sentence above:
Let's put a sentence (by Oscar Wilde) into the template and see what we get:
Be yourself; everyone else is already taken.
Rewrite the sentence above:
Feeding this text into the model, you’ll get a nice completion:
Be yourself; you are the only one who can live your life.
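Assembling a zero-shot prompt is just string formatting. Here's a minimal sketch; the `build_zero_shot_prompt` helper is purely illustrative, not part of any SDK:

```python
def build_zero_shot_prompt(input_text: str) -> str:
    """Format the input into a zero-shot paraphrasing prompt.

    The blank line between the input and the instruction helps
    the model tell the two apart.
    """
    return f"{input_text}\n\nRewrite the sentence above:\n"


prompt = build_zero_shot_prompt("Be yourself; everyone else is already taken.")
print(prompt)
```

The string returned here is exactly the template shown above, ready to be sent as the `prompt` of a completion request.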
🪄 Tips & Tricks
- Make sure the text and the output are distinguishable. Did you notice the empty line between “[INPUT_TEXT]” and “Rewrite the sentence above:”? 😉 So did the model. This will make it easier for the model to understand what to do.
- Sometimes you'll be able to get better results if you specify the task twice - both at the beginning and at the end. In our case, you could use this template:
Here's a sentence, followed by a paraphrase of it:
[INPUT_TEXT]
A paraphrase of the sentence:
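The double-instruction template can be built the same way. A sketch, with a hypothetical helper name:

```python
def build_sandwich_prompt(input_text: str) -> str:
    """State the task both before and after the input text,
    which can nudge the model toward the intended completion."""
    return (
        "Here's a sentence, followed by a paraphrase of it:\n\n"
        f"{input_text}\n\n"
        "A paraphrase of the sentence:\n"
    )


prompt = build_sandwich_prompt("Be yourself; everyone else is already taken.")
print(prompt)
```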
✍ Few-shot prompts
Look again at the output we got earlier. Maybe the new sentence isn't exactly what you imagined and you want a different one. That can be hard to specify in the prompt, or it may be easy to specify, yet the model still doesn't meet your requirements. Suppose your user requires a formal tone. Simply adjusting the instruction to reflect this may not be enough, and the definition of “formal” can vary.
A powerful solution would be to use the few-shot approach. Here, you’ll need to gather a few examples of <input text, output text> pairs that adhere to your product needs. You should format the pairs into a template such that the desired output would be the most natural continuation. In our case, a suitable template would look like this (assuming you have two example pairs):
[INPUT_TEXT_1]
Rewrite the sentence above to sound more formal:
[OUTPUT_TEXT_1]
###
[INPUT_TEXT_2]
Rewrite the sentence above to sound more formal:
[OUTPUT_TEXT_2]
###
[INPUT_TEXT_3]
Rewrite the sentence above to sound more formal:
To be more concrete, let's gather examples of <input text, output text> pairs that look like what we want, and format them into the template. Notice that we also append the input text provided by the user, but without a matching output text. The result is the following text, which we'll feed into the model:
You only live once, but if you do it right, once is enough.
Rewrite the sentence above to sound more formal:
One life is all you get, but if you live it properly, you can live it for a lifetime.
###
If you tell the truth, you don't have to remember anything.
Rewrite the sentence above to sound more formal:
It is not necessary to remember anything if you tell the truth.
###
Be yourself; everyone else is already taken.
Rewrite the sentence above to sound more formal:
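Programmatically, a few-shot prompt is just the example pairs rendered into the template and joined by the separator, with the user's input (and no output) at the end. A sketch, again with an illustrative helper rather than any official API:

```python
SEPARATOR = "\n###\n"
INSTRUCTION = "Rewrite the sentence above to sound more formal:"


def build_few_shot_prompt(examples, user_input):
    """examples: list of (input_text, output_text) pairs."""
    blocks = [f"{inp}\n\n{INSTRUCTION}\n{out}" for inp, out in examples]
    # The user's input gets the instruction but no output --
    # the model's completion becomes the paraphrase.
    blocks.append(f"{user_input}\n\n{INSTRUCTION}\n")
    return SEPARATOR.join(blocks)


examples = [
    ("You only live once, but if you do it right, once is enough.",
     "One life is all you get, but if you live it properly, you can live it for a lifetime."),
    ("If you tell the truth, you don't have to remember anything.",
     "It is not necessary to remember anything if you tell the truth."),
]
prompt = build_few_shot_prompt(examples, "Be yourself; everyone else is already taken.")
print(prompt)
```

Keeping the examples in code like this makes it easy to add, swap, or reorder them as you iterate on the prompt.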
And here’s the generated result:
Do not pretend to be someone else; be yourself.
It seems the model has captured the tone from the examples we provided. You can try it yourself in the playground.
Another advantage of the few-shot approach is that it lets us tell the model when to stop. Did you notice the ### at the end of every rewrite? Besides separating the examples, which helps the model distinguish between them, it also serves as a stop sequence: generation can be cut off as soon as the model emits it. You can read more about stop sequences here.
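If you pass the separator as a stop sequence, the API handles this for you; but if you ever need to trim a raw completion client-side, it's a one-liner. A small sketch (the sample completion text is made up for illustration):

```python
def truncate_at_stop(completion: str, stop: str = "###") -> str:
    """Keep only the text generated before the first stop sequence."""
    return completion.split(stop, 1)[0].strip()


# A hypothetical raw completion that ran past the separator:
raw = "Do not pretend to be someone else; be yourself.\n###\nSome unwanted extra text"
clean = truncate_at_stop(raw)
print(clean)  # -> Do not pretend to be someone else; be yourself.
```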
🪄 Tips & Tricks
- Make sure the example inputs you gather are diverse. In the case of paraphrasing, if all the inputs come from the same domain (let's say tech), the model may perform worse on input text from a different domain, such as medical.
- When using AI21 Studio for a classification task, make sure the outputs are balanced. For example, if instead of generating a paraphrase, you’d like to predict a topic out of 5 possible topics, make sure all topics appear in the examples you provide, in a similar frequency.
- Moreover, make sure they’re shuffled: if all the preceding examples have the same topic, the model will be more likely to predict that topic, even if it’s not the right one.
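The balance-and-shuffle advice above is easy to enforce in code. A sketch, assuming you keep classification examples as (text, label) pairs; the helper name and sample data are illustrative:

```python
import random
from collections import Counter


def prepare_classification_examples(examples, seed=42):
    """examples: list of (text, label) pairs.
    Warn if one label dominates, then shuffle so no single
    label clusters at the end of the prompt."""
    counts = Counter(label for _, label in examples)
    if max(counts.values()) > 2 * min(counts.values()):
        print(f"Warning: imbalanced labels: {dict(counts)}")
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps prompts reproducible
    return shuffled


examples = [
    ("The new chipset doubles battery life", "tech"),
    ("A new vaccine was approved today", "medical"),
    ("The framework release breaks old plugins", "tech"),
    ("The hospital is expanding its ICU", "medical"),
]
prepared = prepare_classification_examples(examples)
```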
Want to dive deeper into prompt engineering? Read our blog post.