You can generate a text completion for a given text prompt by posting an HTTP request to the complete endpoint of the language model you want to use. The request contains the input text, called a prompt, and various parameters that control the generation. For authentication, you must include your API key in the request headers. The response contains the tokenized prompt, the generated text(s), called completion(s), and various metadata.
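
For illustration, a minimal sketch of such a request in Python is shown below. The endpoint URL and the maxTokens parameter are placeholders, not part of this reference; numResults and topKReturn are the request parameters referenced later in this section.

    import os
    import requests

    # Placeholder endpoint; substitute the complete endpoint of the model you use.
    ENDPOINT = "https://api.example.com/v1/some-model/complete"

    response = requests.post(
        ENDPOINT,
        headers={
            # The API key is passed in the request headers for authentication.
            "Authorization": f"Bearer {os.environ['API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Once upon a time",
            "numResults": 2,   # number of completions to return
            "topKReturn": 3,   # top-K alternatives per token position (0 disables)
            "maxTokens": 32,   # assumed generation-length parameter
        },
    )
    response.raise_for_status()
    data = response.json()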


Response

The response is a nested data structure as described below. At its top level, the response has the following fields:

id

A unique string id for the processed request. Repeated identical requests get different ids.

prompt

The prompt, including the raw text and, if requested, the tokens with their logprobs and the top-K alternative tokens at each position.

Has two nested fields:

  • text - the raw prompt string.
  • tokens - a list of TokenData describing the tokenized prompt (see TokenData below).

completions

A list of completions, each including the raw text, tokens, and logprobs. The number of completions matches the requested numResults parameter.

Each completion has two nested fields:

  • data, containing text (string) and tokens (list of TokenData) for the completion.
  • finishReason, a nested data structure describing why generation terminated for this completion.
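
Continuing the request sketch above, the top-level fields can be read from the parsed JSON; the assumption here is that the wire-format keys match the field names in this section.

    # data is the parsed JSON response from the sketch above.
    print(data["id"])                          # unique request id

    prompt_tokens = data["prompt"]["tokens"]   # list of TokenData for the prompt

    for completion in data["completions"]:
        print(completion["data"]["text"])      # raw completion text
        print(completion["finishReason"])      # why generation stopped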

TokenData

Both the prompt and each of the completions provide lists of TokenData, where each entry describes a token and, if requested, its top-K alternatives. An instance of TokenData contains the following fields:

generatedToken

Has two nested fields:

  • token - the string representation of the token.
  • logprob - the predicted log probability of the token (float).
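
Since logprob is a log probability (assumed here to be a natural log), exponentiating it recovers the model's probability for the token:

    import math

    # Example logprob value for a generated token; assuming natural-log
    # probabilities, exp() recovers the model's probability for the token.
    logprob = -0.3567
    probability = math.exp(logprob)   # ~0.70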

topTokens

A list of the top-K alternative tokens for this position, sorted by probability. The length of the list is set by the topKReturn request parameter; the field is null if topKReturn=0.

Each token in the list has a token (string) field and a logprob (float) field.

textRange

The start and end offsets of this token in the decoded text string.
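
As a sketch of how these fields fit together, the snippet below walks the TokenData of the first completion from the earlier example; the start/end subfield names of textRange, like the other key names, are assumptions based on the descriptions above.

    # Inspect the tokens of the first completion (continuing the earlier sketch).
    completion = data["completions"][0]
    text = completion["data"]["text"]

    for token_data in completion["data"]["tokens"]:
        generated = token_data["generatedToken"]
        token_range = token_data["textRange"]   # assumed to expose start/end offsets
        surface = text[token_range["start"]:token_range["end"]]
        print(generated["token"], generated["logprob"], repr(surface))

        # topTokens is null unless topKReturn > 0 was requested.
        for alternative in token_data["topTokens"] or []:
            print("  alternative:", alternative["token"], alternative["logprob"])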