Python SDK

The AI21 Python SDK enables access to all of AI21 Studio's models and tools. We recommend using the SDK rather than making direct REST calls because the SDK provides autocompletion, documentation, and simplified usage for synchronous and asynchronous calls.

The SDK is located in this GitHub repository, which includes an extensive usage guide and an examples folder.

Basic usage

Installation

To use AI21's SDK, install it as shown below. (If an older version of the SDK is already installed, this command upgrades it to the latest version.)

$ pip install -U "ai21>=2.2.0"
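
To confirm which version is installed (this page assumes 2.2.0 or later), you can check the package metadata. This is a minimal sketch that uses only the Python standard library (3.8+):

import importlib.metadata

# Prints the installed ai21 package version (for example, 2.2.0).
print(importlib.metadata.version("ai21"))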

Get an API key

Before you can start using the SDK, you'll need to obtain your API key from AI21 Studio. Make sure you understand your free usage quota, as well as the usage rates that apply after the free quota is exhausted.
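
By default the SDK reads the key from the AI21_API_KEY environment variable (as shown in the example below), so a common practice is to export it there rather than hardcoding it. A minimal sketch for checking that the variable is set before creating a client:

import os

# AI21_API_KEY is the environment variable the SDK's client reads by default.
api_key = os.environ.get("AI21_API_KEY")
if not api_key:
    raise RuntimeError("AI21_API_KEY is not set. Get a key from AI21 Studio and export it first.")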

Using the SDK

The SDK wraps our REST API and provides a corresponding method for each API endpoint. We recommend glancing through our API documentation to see all the endpoints and what they do. Don't worry--it's not a very long list 🙂

Class names for model methods generally follow the REST URL path naming; for utilities, such as the tokenizer or RAG Engine library management, the class names might differ. You can see SDK usage examples in the reference pages.

The following SDK example calls the Jamba Instruct endpoint in question-answering mode (that is, a single user message with a command or question):

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

# One way of passing your key to the client.
client = AI21Client(api_key="<YOUR_API_KEY>")

# Another way to set your key is by setting the AI21_API_KEY
# environment variable to your key value. The default value
# of api_key in the constructor is os.environ["AI21_API_KEY"]. So:
# import os
# os.environ["AI21_API_KEY"] = "<YOUR_API_KEY>"
# client = AI21Client()

def single_message_instruct():
    messages = [
        ChatMessage(
            role="user",
            content="Tell me something I don't know. Limit the response to 30 words maximum."
        )
    ]

    response = client.chat.completions.create(
        model="jamba-instruct-preview",
        messages=messages,
        top_p=1.0 # Setting to 1 encourages different responses each call.
    )
    print(response.to_json())

single_message_instruct()

Explanation

  1. Start by creating an AI21Client. Provide your API key on creation either by passing in the api_key parameter explicitly, or by setting os.environ["AI21_API_KEY"] to the value of your key.

  2. Call the create method that corresponds to the desired REST endpoint. Each create method sits at the end of a nested class hierarchy on the client that mirrors the URL path of the corresponding REST call, after the version segment. So, for example (see the sketch after this list):

REST URL                     | Corresponding create method          | Description
/studio/v1/library/answer    | client.library.answer.create(...)    | RAG Engine contextual answer
/studio/v1/answer            | client.answer.create(...)            | Single document contextual answer
/studio/v1/chat/completions  | client.chat.completions.create(...)  | Jamba Instruct chat/completion
...                          | ...                                  | ...
  3. The create method makes the call and returns a response.
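
To illustrate the mapping in step 2, here is a minimal sketch that calls the single document contextual answer endpoint (/studio/v1/answer) through client.answer.create. The context and question parameter names are assumptions for illustration; check the contextual answers reference page for the exact signature.

from ai21 import AI21Client

client = AI21Client()  # Reads AI21_API_KEY from the environment.

# /studio/v1/answer maps to client.answer.create: the attribute path mirrors
# the URL path after the version segment.
response = client.answer.create(
    context="AI21 Studio provides API access to Jamba Instruct and task-specific models.",
    question="What does AI21 Studio provide?",  # Parameter names assumed; see the reference page.
)
print(response.to_json())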

Next steps

The project README on GitHub includes extensive information about synchronous and asynchronous usage, error handling, streaming, and running the SDK on cloud platforms such as Amazon SageMaker and Amazon Bedrock.
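
For example, asynchronous calls follow the same pattern as the synchronous example above. This is a minimal sketch, assuming the SDK's AsyncAI21Client class; see the README for authoritative async and streaming usage.

import asyncio

from ai21 import AsyncAI21Client
from ai21.models.chat import ChatMessage

# AsyncAI21Client mirrors AI21Client, but its create methods are awaitable.
# Assumes AI21_API_KEY is set in the environment.
async_client = AsyncAI21Client()

async def single_message_instruct_async():
    messages = [
        ChatMessage(
            role="user",
            content="Tell me something I don't know. Limit the response to 30 words maximum."
        )
    ]
    response = await async_client.chat.completions.create(
        model="jamba-instruct-preview",
        messages=messages,
    )
    print(response.to_json())

asyncio.run(single_message_instruct_async())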

You can find many more examples in the GitHub examples folder, and basic SDK usage is also shown in the REST documentation for each endpoint.

