Jurassic-2 (J2) is our series of state-of-the-art Large Language Models. As the new generation following Jurassic-1, J2 improves upon the previous series in every aspect and offers new features and capabilities that put it in a league of its own.
The Jurassic-2 models are available in three sizes:
As the largest and most powerful model in the Jurassic series, J2-Ultra is an ideal choice for the most complex language processing tasks and generative text applications. Further, the model can be fine-tuned for optimum performance in any custom application.
This model offers enhanced text generation capabilities, making it well-suited to language tasks with a greater degree of complexity. Its fine-tuning options allow for optimization of quality, while maintaining an affordable price and high efficiency.
Designed for fast responses, this model can be fine-tuned to optimize performance for relatively simple tasks, making it an ideal choice for language processing tasks that require maximum affordability and less processing power.
In addition to top-quality performance in English, all J2 models support several non-English languages, including:
All J2 models were trained on a massive corpus of text, making them highly versatile, general-purpose text generators capable of composing human-like text and solving complex tasks such as question answering, text classification, and many others.
J2 models can be applied to virtually any language task by crafting a suitable prompt containing a description of the task and/or a few examples, a process commonly known as prompt engineering. Popular use cases include generating marketing copy, powering chatbots, and assisting creative writing.
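As a sketch of what prompt engineering looks like in practice, the snippet below assembles a few-shot prompt from a task description and a handful of examples. The sentiment-classification task, the example reviews, and the `build_prompt` helper are all hypothetical, chosen only to illustrate the pattern; they are not part of AI21's documentation.

```python
# A minimal few-shot prompt sketch. The task, examples, and helper name
# are hypothetical illustrations, not part of the J2 API.
def build_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with the new input and an open label for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each product review as Positive or Negative.",
    [("Great battery life and a crisp screen.", "Positive"),
     ("Stopped working after two days.", "Negative")],
    "Shipping was fast and the fit is perfect.",
)
print(prompt)
```

Ending the prompt with an open `Sentiment:` label nudges the model to continue the established pattern, so the completion is the predicted label.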
With some trial and error, you should be able to bootstrap a prompt that produces good results for your use case. However, to achieve even better quality and to scale up your app, we recommend training a custom model.
You can generate a text completion for a given text prompt by using our Python SDK or by posting an HTTP request to the complete endpoint corresponding to the desired language model. The request contains the input text, called a prompt, and various parameters controlling the generation. For authentication, you must include your API key in the request headers. A complete response contains the tokenized prompt, the generated text(s), called completion(s), and various metadata.
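The flow above can be sketched with only the standard library. Treat the endpoint path and the parameter names (`prompt`, `maxTokens`, `temperature`) as assumptions to be checked against the request and response specifications; the environment variable name is likewise illustrative.

```python
import json
import os
import urllib.request

# Sketch of a completion request over HTTP. The endpoint path and JSON
# parameter names are assumptions; consult the API reference for the
# authoritative specification.
API_KEY = os.environ.get("AI21_API_KEY", "<your-api-key>")
URL = "https://api.ai21.com/studio/v1/j2-ultra/complete"

payload = {
    "prompt": "Write a tagline for an eco-friendly water bottle:",
    "maxTokens": 32,     # upper bound on completion length
    "temperature": 0.7,  # higher values yield more varied text
}
request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",  # API key goes in the headers
        "Content-Type": "application/json",
    },
)

# Only send the request when a real key is configured.
if API_KEY != "<your-api-key>":
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
        # The response carries the tokenized prompt, the generated
        # completion(s), and metadata; print the first completion's text.
        print(result["completions"][0]["data"]["text"])
```

The same request can be made through the Python SDK, which wraps the endpoint and authentication for you.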
The request and response specifications are documented in full here.