AI21 Studio Python SDK Guide

This SDK enables interaction with all of AI21 Studio's models and services, including Jurassic-2 and Task-Specific Models.

You can find the home page of our SDK in this GitHub repository.

Set up

To get started with AI21's SDK, you'll need to install it first. To do so, simply run the following command:

$ pip install -U "ai21>=2.2.0"

API Keys

Before you can start using the SDK, you'll need to obtain your API key from AI21 Studio. Once you have your API key, you can set it globally in your Python code like this:

import os
os.environ["AI21_API_KEY"] = "<YOUR_API_KEY>"
 
# Another way is to pass it to the client on init:

from ai21 import AI21Client
client = AI21Client(api_key="<YOUR_API_KEY>")

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

Building a Prompt

To generate text using Jurassic-2 language models, you need to provide a prompt for the model to complete. We recommend following our Prompt Engineering best practices to achieve the best results.

Here are a couple of examples of prompts you can use to get started:

"Zero Shot" prompt:

prompt = "These are a few of my favorite"

"Few Shot" prompt:

prompt = """
Complete the following sentence.
Sentence: These are a few of my favorite
Completion: things. Cream-colored ponies and crisp apple strudels.

##

Complete the following sentence.
Sentence: These are a few of my favorite
Completion: rock bands: The Beatles, The Doors, Fleetwood Mac.

##

Complete the following sentence.
Sentence: These are a few of my favorite
Completion:
"""

Chat

Chat - Jamba Models

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

client = AI21Client(
    # defaults to os.environ.get('AI21_API_KEY')
    api_key='my_api_key',
)

messages = [
    ChatMessage(
        content="Hello, this is a test",
        role="user",	
    ),
]

response = client.chat.completions.create(
  model="jamba-instruct-preview",
  messages=messages,
  max_tokens=1024,
)

For more details on Jamba Instruct (preview) and its parameters, check out the documentation.
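
To print the generated reply from the example above, read it off the response object. This is a minimal sketch; it assumes the Jamba chat completions response follows the familiar choices/message shape:

# Assumed response shape: a list of choices, each holding a message with the generated content
print(response.choices[0].message.content)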

Chat - J2 Models

from ai21 import AI21Client
from ai21.models import RoleType, Penalty
from ai21.models import ChatMessage

system = "You're a support engineer in a SaaS company"
messages = [
    ChatMessage(text="Hello, I need help with a signup process.", role=RoleType.USER),
    ChatMessage(text="Hi Alice, I can help you with that. What seems to be the problem?", role=RoleType.ASSISTANT),
    ChatMessage(text="I am having trouble signing up for your product with my Google account.", role=RoleType.USER),
]

client = AI21Client(
    # defaults to os.environ.get('AI21_API_KEY')
    api_key='my_api_key',
)
response = client.chat.create(
    system=system,
    messages=messages,
    model="j2-ultra",
)
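
To read the assistant's reply from this response, here is a minimal sketch (assuming the J2 chat response exposes an outputs list whose items carry the generated text):

# Assumed response shape: a list of outputs, each with the generated text
print(response.outputs[0].text)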

Jurassic-2

Jurassic-2 with Python SDK Code Example

Below is a sample usage of the AI21 Python SDK to interact with Jurassic-2 models, demonstrating some of the frequently utilized parameters:

import os

from ai21 import AI21Client

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

# J2 Mid
response_mid = client.completion.create(
  model="j2-mid",
  prompt="These are a few of my favorite",
  num_results=1,
  max_tokens=2,
  temperature=0.4,
  top_k_return=0,
  top_p=1,
  stop_sequences=["##"]
)

print(response_mid)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

# J2 Mid
response_mid = ai21.Completion.execute(
  model="j2-mid",
  prompt="These are a few of my favorite",
  numResults=1,
  maxTokens=2,
  temperature=0.4,
  topKReturn=0,
  topP=1,
  stopSequences=["##"]
)

print(response_mid)

By customizing these parameters, you can control the content and style of the generated text. For a full list of available options, check out our Complete API page.

All Jurassic-2 models (j2-light, j2-mid, and j2-ultra) can be used through the same client.completion.create() method, or ai21.Completion.execute() in the legacy SDK.

Response

Here's an example of a response object from executing a j2-mid model:

{
   "id":"94078cb6-687e-4262-ef8f-1d7c2b0dbd2b",
   "prompt":{
      "text":"These are a few of my favorite",
      "tokens":[
         {
            "generatedToken":{
               "token":"▁These▁are",
               "logprob":-8.824776649475098,
               "raw_logprob":-8.824776649475098
            },
            "topTokens":"None",
            "textRange":{
               "start":0,
               "end":9
            }
         },
         {
            "generatedToken":{
               "token":"▁a▁few",
               "logprob":-4.798709869384766,
               "raw_logprob":-4.798709869384766
            },
            "topTokens":"None",
            "textRange":{
               "start":9,
               "end":15
            }
         },
         {
            "generatedToken":{
               "token":"▁of▁my▁favorite",
               "logprob":-1.0864331722259521,
               "raw_logprob":-1.0864331722259521
            },
            "topTokens":"None",
            "textRange":{
               "start":15,
               "end":30
            }
         }
      ]
   },
   "completions":[
      {
         "data":{
            "text":" things –",
            "tokens":[
               {
                  "generatedToken":{
                     "token":"▁things",
                     "logprob":-0.0003219324571546167,
                     "raw_logprob":-0.47372230887413025
                  },
                  "topTokens":"None",
                  "textRange":{
                     "start":0,
                     "end":7
                  }
               },
               {
                  "generatedToken":{
                     "token":"▁–",
                     "logprob":-7.797079563140869,
                     "raw_logprob":-4.319167613983154
                  },
                  "topTokens":"None",
                  "textRange":{
                     "start":7,
                     "end":9
                  }
               }
            ]
         },
         "finishReason":{
            "reason":"length",
            "length":2
         }
      }
   ]
}

The response is a nested data structure containing information about the processed request, prompt, and completions. At the top level, the response has the following fields:

ID

A unique string id for the processed request. Repeated identical requests receive different IDs.

prompt

The prompt includes the raw text, the tokens with their log probabilities, and the top-K alternative tokens at each position, if requested. It has two nested fields:

  • text (string)
  • tokens (list of TokenData)

completions

A list of completions, including raw text, tokens, and log probabilities. The number of completions corresponds to the requested numResults. Each completion has two nested fields:

  • data, which contains the text (string) and tokens (list of TokenData) for the completion.
  • finishReason, a nested data structure describing the reason generation was terminated for this completion.
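
As a quick illustration, here is how these top-level fields can be read from the response_mid object returned by the SDK example above. This is a minimal sketch; the Python attribute names are assumed to follow the SDK's snake_case convention where they differ from the raw JSON keys:

print(response_mid.id)                                   # unique request ID
print(response_mid.prompt.text)                          # the prompt that was sent
print(response_mid.completions[0].data.text)             # the generated text
print(response_mid.completions[0].finish_reason.reason)  # why generation stopped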

TokenData

The TokenData object provides detailed information about each token in both the prompt and the completions. It includes the following fields:

generatedToken:

The generatedToken field consists of three nested fields:

  • token: The string representation of the token.
  • logprob: The predicted log probability of the token (as a float), after the sampling parameters have been applied.
  • raw_logprob: The raw predicted log probability of the token (as a float). With neutral sampling settings (namely temperature=1 and topP=1), raw_logprob equals logprob.

topTokens

The topTokens field is a list of the top K alternative tokens for this position, sorted by probability, according to the topKReturn request parameter. If topKReturn is set to 0, this field will be null.

Each token in the list includes:

  • token: The string representation of the alternative token.
  • logprob: The predicted log probability of the alternative token as a float value.

textRange

The textRange field indicates the start and end offsets of the token in the decoded text string:

  • start: The starting index of the token in the decoded text string.
  • end: The ending index of the token in the decoded text string.
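
Putting these fields together, you can walk the tokens of a completion and recover each token's surface text from its offsets. A minimal sketch, assuming snake_case attribute names (generated_token, text_range) on the SDK's token objects:

completion = response_mid.completions[0]
for token in completion.data.tokens:
    rng = token.text_range
    surface = completion.data.text[rng.start:rng.end]  # slice the decoded text
    print(surface, token.generated_token.logprob)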

Task-Specific Models

Contextual Answers API with Python SDK Code Example

import os

from ai21 import AI21Client

CONTEXT = """
In 2020 and 2021, enormous QE — approximately $4.4 trillion, or 18%, of 2021 gross
domestic product (GDP) — and enormous fiscal stimulus (which has been and
always will be inflationary) — approximately $5 trillion, or 21%, of 2021 GDP
— stabilized markets and allowed companies to raise enormous amounts of
capital. In addition, this infusion of capital saved many small businesses and
put more than $2.5 trillion in the hands of consumers and almost $1 trillion into
state and local coffers. These actions led to a rapid decline in unemployment, 
dropping from 15% to under 4% in 20 months — the magnitude and speed of which were both
unprecedented. Additionally, the economy grew 7% in 2021 despite the arrival of
the Delta and Omicron variants and the global supply chain shortages, which were
largely fueled by the dramatic upswing in consumer spending and the shift in
that spend from services to goods.
"""

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))
response = client.answer.create(
    context=CONTEXT,
    question="Did the economy shrink after the Omicron variant arrived?",
)
print(response)
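
Rather than printing the whole response, you will usually want the answer itself. A minimal sketch, assuming the response exposes answer and answer_in_context fields as described in the Contextual Answers documentation:

if response.answer_in_context:
    print(response.answer)
else:
    print("The answer is not found in the provided context.")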

Paraphrase API with Python SDK Code Example

import os

from ai21 import AI21Client
from ai21.models import ParaphraseStyleType

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

response = client.paraphrase.create(
  text="You only live once, but if you do it right, once is enough.",
  style=ParaphraseStyleType.GENERAL,
)

print(response.suggestions[0].text)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

response = ai21.Paraphrase.execute(
  text="You only live once, but if you do it right, once is enough.",
  style="general" 
)

print(response.suggestions[0].text)

Note: This API will return a list of up to 10 suggested paraphrases with every request. Read more here.
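
Since a single request can return several suggestions, you may want to iterate over all of them rather than only the first; for example, using the response from the v2 client example above:

for suggestion in response.suggestions:
    print(suggestion.text)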

Grammatical Error Correction (GEC) API with Python SDK Code Example

import os

from ai21 import AI21Client

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

response = client.gec.create(
  text="jazzz is a great stile off music",
)

print(response.corrections[0].suggestion)
print(response.corrections[0].start_index)
print(response.corrections[0].end_index)
print(response.corrections[0].original_text)
print(response.corrections[0].correction_type)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

response = ai21.GEC.execute(
  text="jazzz is a great stile off music",
)

print(response.corrections[0].suggestion)
print(response.corrections[0].startIndex)
print(response.corrections[0].endIndex)
print(response.corrections[0]['originalText'])
print(response.corrections[0]['correctionType'])

Note: This API can return six types of grammatical error corrections: Grammar, Missing Word, Punctuation, Spelling, Word Repetition, and Wrong Word. Read more here.
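
The start and end indices make it straightforward to apply the suggested corrections to the original text. A minimal sketch using the v2 client response above (corrections are applied right to left so that earlier offsets remain valid):

text = "jazzz is a great stile off music"
corrected = text
# Apply corrections from the end of the string backwards so indices stay valid
for correction in sorted(response.corrections, key=lambda c: c.start_index, reverse=True):
    corrected = (
        corrected[:correction.start_index]
        + correction.suggestion
        + corrected[correction.end_index:]
    )
print(corrected)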

Text Improvements API with Python SDK Code Example

import os

from ai21 import AI21Client
from ai21.models import ImprovementType

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

response = client.improvements.create(
    text="Affiliated with the profession of project management, I have ameliorated myself with a different set of hard skills as well as soft skills",
    types=[ImprovementType.FLUENCY, ImprovementType.VOCABULARY_SPECIFICITY],
)

print(response.improvements[0].original_text)
print(response.improvements[0].suggestions)
print(response.improvements[0].suggestions[0])
print(response.improvements[0].improvement_type)
print(response.improvements[1].start_index)
print(response.improvements[1].end_index)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

response = ai21.Improvements.execute(
    text="Affiliated with the profession of project management, I have ameliorated myself with a different set of hard skills as well as soft skills",
    types=["fluency", "vocabulary/specificity"]
)

print(response.improvements[0].originalText)
print(response.improvements[0].suggestions)
print(response.improvements[0].suggestions[0])
print(response.improvements[0].improvementType)
print(response['improvements'][1].startIndex)
print(response['improvements'][1].endIndex)

Note: This API can return as many as 5 different text improvement types: fluency, vocabulary/specificity, vocabulary/variety, clarity/short-sentences, and clarity/conciseness. Read more here.
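
Each improvement carries its type, the original span, and one or more suggested rewrites, so you can review them all in one pass; for example, with the v2 client response above:

for improvement in response.improvements:
    print(improvement.improvement_type, "|", improvement.original_text, "->", improvement.suggestions[0])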

Summarize API with Python SDK Code Example

import os

from ai21 import AI21Client
from ai21.models import DocumentType

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

response = client.summarize.create(
  source="https://en.wikipedia.org/wiki/Koala",
  source_type=DocumentType.URL, 
)

print(response.summary)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

response = ai21.Summarize.execute(
  source="https://en.wikipedia.org/wiki/Koala",
  sourceType="URL" 
)

print(response.summary)

Note: source_type can be either URL or TEXT. Read more here.
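
To summarize raw text instead of a URL, pass the text itself as the source and set the source type accordingly. A minimal sketch, assuming DocumentType.TEXT is the matching enum member:

text_response = client.summarize.create(
  source="<your long document text>",   # raw text to summarize (placeholder)
  source_type=DocumentType.TEXT,
)

print(text_response.summary)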

Text Segmentation API with Python SDK Code Example

import os

from ai21 import AI21Client
from ai21.models import DocumentType

client = AI21Client(api_key=os.environ.get("AI21_API_KEY"))

response = client.segmentation.create(
  source="https://www.ai21.com/blog/summarizing-legal-documents-for-different-personas-using-ai21-studio",
  source_type=DocumentType.URL,
)

print(response.segments[0].segment_text)
print(response.segments[0].segment_type)

# Legacy SDK (pre-2.0):
import ai21
ai21.api_key = "<YOUR_API_KEY>"

response = ai21.Segmentation.execute(
  source="https://www.ai21.com/blog/summarizing-legal-documents-for-different-personas-using-ai21-studio",
  sourceType="URL" 
)

print(response.segments[0].segmentText)
print(response.segments[0].segmentType)

Note: source_type can be either URL or TEXT. Read more here.
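
To work with every returned segment rather than only the first, you can iterate over the list; for example, with the v2 client response above:

for segment in response.segments:
    print(segment.segment_type, ":", segment.segment_text)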