Python SDK
The AI21 Python SDK enables access to all of AI21 Studio's models and tools. We recommend using the SDK rather than making direct REST calls because the SDK provides autocompletion, documentation, and simplified usage for synchronous and asynchronous calls.
The SDK is located in this GitHub repository, which includes a full usage guide and lots of examples.
Basic usage
Installation
To use AI21's SDK, install it as shown below. (If an older version of the SDK is already installed, this command upgrades it to the latest version.)
$ pip install -U "ai21>=2.13.0"
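To confirm that the installed version meets the minimum shown above, you can read the package metadata from Python. This quick check uses only the standard library, not the SDK itself:

from importlib.metadata import version
# Read the installed ai21 package version from its metadata.
print(version("ai21"))  # Should print 2.13.0 or later.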
Get an API key
Before you can start using the SDK, you'll need to obtain your API key from AI21 Studio. Understand your free usage quota as well as usage rates after free usage expires.
Using the SDK
The SDK wraps our REST API and provides corresponding methods to call each API endpoint. You should glance over our API documentation to understand all the endpoints and what they do. Don't worry, it's not a very long list.
Class naming for various methods generally follows the REST URL path naming for models; for utilities, such as the tokenizer or RAG library management, the class naming might be different. You can see SDK usage examples in the reference pages.
The following SDK example calls the Jamba 1.5 Large endpoint in question-answering mode (that is, a single user message with a command or question):
from ai21 import AI21Client
from ai21.models.chat import UserMessage
# One way of passing your key to the client.
client = AI21Client(api_key="<YOUR_API_KEY>")

# Another way to set your key is by setting the AI21_API_KEY
# environment variable to your key value. The default value
# of api_key in the constructor is os.environ["AI21_API_KEY"]. So:
# import os
# os.environ["AI21_API_KEY"] = "<YOUR_API_KEY>"
# client = AI21Client()

def single_message_instruct():
    messages = [
        UserMessage(
            content="Tell me something I don't know. Limit the response to 30 words maximum."
        )
    ]
    response = client.chat.completions.create(
        model="jamba-1.5-large",
        messages=messages,
        top_p=1.0  # Setting to 1 encourages different responses each call.
    )
    print(response.to_json())

single_message_instruct()
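The example above prints the raw JSON response. If you'd rather work with the parsed response object, you can read its fields directly; the sketch below assumes the usual chat-completions response shape (a choices list whose entries contain a message), so check the response reference for the exact fields:

from ai21 import AI21Client
from ai21.models.chat import UserMessage

client = AI21Client()  # Reads AI21_API_KEY from the environment.
response = client.chat.completions.create(
    model="jamba-1.5-large",
    messages=[UserMessage(content="Name three prime numbers.")],
)
# Assumed response shape: choices[0].message.content holds the generated text.
print(response.choices[0].message.content)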
Explanation
- Start by creating an AI21Client. Provide your API key on creation either by passing in the api_key parameter explicitly, or by setting os.environ["AI21_API_KEY"] to the value of your key.
- Call the create method that corresponds to the desired REST endpoint. Each create method has a nested class hierarchy that mirrors the URL path of the corresponding REST call, after the version segment. So, for example:
REST URL | Corresponding create method | Description
---|---|---
/studio/v1/library/answer | client.library.answer.create(...) | RAG Engine contextual answer
/studio/v1/answer | client.answer.create(...) | Single-document contextual answer
/studio/v1/chat/completions | client.chat.completions.create(...) | Jamba Instruct chat/completion
... | ... | ...
- The create method makes the call and returns a response.
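For instance, following the same path-to-method mapping, a single-document contextual answer call might look like the sketch below. It reuses the client created earlier; the context and question parameter names are assumptions here, so confirm them against the endpoint's reference page:

# Sketch of the /studio/v1/answer mapping (parameter names are assumptions).
response = client.answer.create(
    context="AI21 Studio provides API access to the Jamba family of language models.",
    question="What does AI21 Studio provide?",
)
print(response)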
Next steps
The project README on GitHub includes extensive information about usage, including synchronous and asynchronous calls, error handling, streaming, and using the SDK on cloud platforms such as Amazon SageMaker and Amazon Bedrock.
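As a taste of the asynchronous support covered in the README, an async call looks roughly like the sketch below; it assumes the async client is exposed as AsyncAI21Client and reads AI21_API_KEY from the environment, so check the README for the exact import:

import asyncio

from ai21 import AsyncAI21Client
from ai21.models.chat import UserMessage

client = AsyncAI21Client()  # Assumed to mirror AI21Client; see the README.

async def main():
    response = await client.chat.completions.create(
        model="jamba-1.5-large",
        messages=[UserMessage(content="Explain async I/O in one sentence.")],
    )
    print(response.to_json())

asyncio.run(main())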
You can find lots of examples in the GitHub examples folder. You can also find basic SDK usage shown in the REST documentation for each endpoint.