Large Language Models
Introduction to the core of our product
Natural language processing (NLP) has seen rapid growth in the last few years since large language models (LLMs) were introduced. These huge models are based on the Transformer architecture, which enables training much larger and more powerful language models than was previously possible.
We divide LLMs into two main categories: autoregressive and masked language models (LMs). A masked LM fills in words that have been blanked out anywhere in a text, while an autoregressive LM predicts the next word given everything that came before it. On this page, we will focus on autoregressive LLMs, as our language models, the Jurassic-1 series, belong to this category.
⚡ The task: predict the next word
An autoregressive LLM is a neural network composed of billions of parameters. It is trained on a massive amount of text with one goal: predicting the next word, given the text so far. By repeating this action, each time appending the predicted word to the provided text, the model produces a complete text (e.g., full sentences, paragraphs, articles, books, and more). In terms of terminology, the textual output (the complete text) is called a “completion”, while the input (the given, original text) is called a “prompt”.
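To make the loop concrete, here is a minimal sketch of greedy autoregressive decoding using the open-source Hugging Face transformers library. GPT-2 stands in here for any autoregressive LLM; Jurassic-1 itself is served through an API rather than loaded locally.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Natural language processing has seen"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Repeat the one core action: predict the next token and append it
# to the running text, turning the prompt into a completion.
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits   # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedy choice: the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))      # the prompt followed by its completion
```

In practice, production systems rarely decode greedily; sampling with a temperature or top-p cutoff yields more varied completions, but the underlying predict-and-append loop is the same.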
🎓 Added value: knowledge acquisition
Imagine you had to read all of Shakespeare's works repeatedly to learn a language. Eventually, you would not only memorize all of his plays and poems, but also be able to imitate his writing style.
In a similar fashion, we trained our LLMs by supplying them with a vast range of textual sources. This has enabled them to gain an in-depth understanding of English, as well as broad general knowledge.
🗣️ Interacting with Large Language Models
LLMs are queried using natural language; the practice of crafting these queries is known as prompt engineering.
Rather than writing lines of code and loading a model, you write a natural-language prompt and pass it to the model as input. For example, here is an illustrative prompt together with one plausible completion:

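```
Prompt:     Write a catchy tagline for a neighborhood coffee shop.
Completion: "Where every cup feels like home."
```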
⚙️ Resource-intensive
Training and deploying large language models requires substantial data, computation, and engineering resources. Hosted LLMs, such as our Jurassic-1 models, play an important role here, providing academic researchers and developers with access to this type of technology.
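As a sketch of what that access looks like, the request below calls a Jurassic-1 completion endpoint over plain HTTP. The exact URL, field names, and response shape shown here are assumptions to be checked against the API reference, and YOUR_API_KEY is a placeholder.

```python
import requests

# Assumed endpoint and JSON schema for a Jurassic-1 completion request;
# consult the official API reference for the authoritative format.
response = requests.post(
    "https://api.ai21.com/studio/v1/j1-large/complete",     # assumed endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},       # placeholder key
    json={
        "prompt": "Write a catchy tagline for a neighborhood coffee shop.",
        "maxTokens": 16,      # assumed parameter names
        "temperature": 0.7,
    },
)
print(response.json()["completions"][0]["data"]["text"])    # assumed response shape
```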