Responsible use

AI21 Studio provides open access to state-of-the-art language models that can power a wide variety of useful applications. We believe it is important to ensure that this technology is used responsibly, while allowing developers the freedom they need to experiment rapidly and deploy solutions at scale.

To use AI21 Studio, you are required to comply with our Terms of Use and with the following usage guidelines. Provided you comply with these requirements, you may use AI21 Studio to power applications with live users without any additional approval. We reserve the right to limit or suspend your access to AI21 Studio at any time if we believe these terms or guidelines have been violated.

Please check these usage guidelines periodically, as they may be updated from time to time. For any questions, clarifications or concerns, please contact [email protected].

Usage Guidelines

  1. AI21 Studio must not be used for any of the following activities:

    1. Illegal activities, such as hate speech, gambling, child pornography or intellectual property infringement;
    2. Harassment, victimization, intimidation, fraud or spam;
    3. Creation or dissemination of misinformation, promotion of self-harm, glorification of violent events or incitement of violence.
  2. Your application may present content generated by AI21 Studio directly to humans (e.g., chatbots or content generation tools). In this case, you are required to ensure the following:

    1. No content generated by AI21 Studio will be posted automatically (without human intervention) to any public website or platform where it may be viewed by an audience of more than 100 people.

      📘 Example

      This means that you can use AI21 Studio to build a bot for your team’s 7-person Slack channel. In contrast, you are not allowed to build a Twitter bot unless each tweet is checked by a human before it is posted. You can build a customer service bot that interacts with any number of customers, provided it chats with each customer separately in a 1:1 conversation.

    2. In all cases, the first human to view text generated by AI21 Studio must not be led to believe that it was written by a human.

      📘 Example

      If you’re building a copywriting tool for marketing professionals, your users must be informed that the text proposed to them is machine generated. They are then free to use and present it as their own, at their discretion. As another example, if you’re building a chatbot, it must be clear to your users that they are conversing with a machine rather than a live human.

    3. Language models such as those accessible via AI21 Studio may generate inappropriate, biased, offensive or otherwise harmful content (see our technical paper for an evaluation of bias in our models). If your application is used by more than 100 people per month, you must provide a method for users to report generated text as harmful. You should monitor these reports and respond to them appropriately.

      📘 Example

      You can build a demo, launch a closed beta, etc. without any special requirements, as long as it is accessed by fewer than 100 users per month. Once you exceed 100 monthly users, you must implement a “flag as inappropriate” button or some similar functionality to collect negative feedback (one possible shape for such a mechanism is sketched after this list).

  3. Except when using custom models, the prompt text for any completion request must contain at least 60 characters of text (about 10 words) not written by your users. This text should be crafted by you to produce the desired functionality for the user (see the prompt-construction sketch after this list).

  4. Language models such as those accessible via AI21 Studio can generate content that is biased against particular groups of people. You may not use AI21 Studio to power automated decision making where individuals may be denied benefits, refused access to a service or otherwise have their wellbeing substantially harmed based on protected characteristics.

  5. AI21 Studio must not be used to classify or profile people based on protected characteristics (such as racial or ethnic background, religion, political views, or health status).
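
To illustrate the reporting requirement in guideline 2.3, here is a minimal sketch of a “flag as inappropriate” handler: a single endpoint that records the flagged text so that reports can be reviewed later. Everything here (the Flask framework, the route name, the storage file) is a hypothetical choice, not a required design; any mechanism that lets users report harmful output and lets you monitor those reports satisfies the guideline.

```python
import json
import time

from flask import Flask, request, jsonify

app = Flask(__name__)
REPORTS_FILE = "harmful_content_reports.jsonl"  # hypothetical storage location

@app.post("/report")  # hypothetical route wired to your UI's "flag" button
def report():
    payload = request.get_json(force=True)
    record = {
        "timestamp": time.time(),
        "generated_text": payload.get("generated_text"),
        "reason": payload.get("reason", "unspecified"),
    }
    # Append each report to a log so it can be monitored and acted on later.
    with open(REPORTS_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
    return jsonify({"status": "received"})
```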
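
Guideline 3 can likewise be made concrete. The sketch below prepends a developer-written instruction (well over 60 characters) to whatever the user typed before sending the completion request. The endpoint URL, model name, request parameters, and response shape are assumptions for illustration; consult the AI21 Studio API reference for the actual format.

```python
import os

import requests

# Assumed endpoint and request format -- check the AI21 Studio API
# reference before relying on these details.
API_URL = "https://api.ai21.com/studio/v1/j2-ultra/complete"
API_KEY = os.environ["AI21_API_KEY"]

# Developer-written instruction: crafted by you, not typed by the user,
# and comfortably longer than the required 60 characters (~10 words).
INSTRUCTION_PREFIX = (
    "Rewrite the following product description so that it is concise, "
    "friendly, and suitable for a marketing email:\n\n"
)

def complete(user_text: str) -> str:
    # The prompt is your fixed prefix plus the user's input, so the
    # portion not written by the user always satisfies the 60-character rule.
    assert len(INSTRUCTION_PREFIX) >= 60
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": INSTRUCTION_PREFIX + user_text, "maxTokens": 200},
    )
    response.raise_for_status()
    # Response shape is assumed; adjust to the documented schema.
    return response.json()["completions"][0]["data"]["text"]
```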