Contextual Answers [Document Library]
A ready-to-use question answering engine that taps into your organizational knowledge base.

Contextual Answers is a plug-and-play engine that organizations can seamlessly embed into their digital assets to enhance the efficiency and accuracy of information queries. The engine provides businesses with LLM technology to instantaneously answer user questions about any organizational data. The answers will be based solely on this data, and will be backed by the proper context from the organizational knowledge base.
The Contextual Answers package we are offering is the full solution: a library to store your files, a retrieval mechanism that fetches the most relevant contexts from your organizational knowledge base and a model that provides an answer based on this context.
As an example, let’s leverage these capabilities to create an efficient knowledge management system (KMS). Ready to get started?
Step 1: Upload your files
You can upload your files to your Library, where we offer free storage up to 1 GB (want more? Contact us: [email protected]). In this example, we will upload three documents with company policies (working from abroad, hybrid work guidelines, IT security).
This can be done with a simple call using our Python SDK or an HTTP request:
import ai21

ai21.api_key = "YOUR_API_KEY"  # your AI21 Studio API key
# Upload a document to your Library; the call returns the new file's ID
file_id = ai21.Library.Files.upload(file_path="path/to/your/file.txt")
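If you prefer the raw HTTP route, a multipart upload with Python's requests library looks roughly like the sketch below; the endpoint path and form field name here are assumptions, so check the API reference for the exact values:
import requests

API_KEY = "YOUR_API_KEY"
# Assumed Document Library upload endpoint; verify against the API reference
url = "https://api.ai21.com/studio/v1/library/files"

with open("path/to/your/file.txt", "rb") as f:
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": f},  # multipart form field name is an assumption
    )
resp.raise_for_status()
print(resp.json())  # expected to contain the new file's ID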
You can also do it via our Studio platform:

You can upload a file as is, store it in a directory (for those who like working with directories), or add labels. This helps you organize your filing system while focusing your questions on a subset of documents.
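As a rough sketch, the same upload call can carry this organizational metadata; the optional path and labels parameters (and the file name) below are illustrative assumptions, so consult the SDK reference for the exact names:
# Upload into a directory-like path and attach labels (parameter names are assumptions)
file_id = ai21.Library.Files.upload(
    file_path="path/to/hybrid_work_guidelines.txt",
    path="/policies/hr",          # logical directory inside your Library
    labels=["policy", "hr"],      # labels you can later filter on
)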
What if my files are not in .txt format?
Currently, we only support uploading files in .txt format. For .pdf files, we recommend using PyPDF2 to convert them to .txt first (note that text extraction may be imperfect for documents containing tables).
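A minimal conversion sketch with PyPDF2 (the file names are placeholders):
from PyPDF2 import PdfReader

# Extract plain text from each page of a PDF and save it as .txt
reader = PdfReader("policy.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

with open("policy.txt", "w", encoding="utf-8") as out:
    out.write(text)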
Step 2: Ask a question
Your users can now ask a question and immediately get the answer with attribution to the relevant source. The system works as follows:

The question is used as a query for a retrieval mechanism, which searches over the entire knowledge base and retrieves the most relevant contexts.
With rapid changes occurring in work environments lately, a common question from employees is about working remotely:
response = ai21.Library.Answer.execute(question="How many days can I work from home?")
The response will be:
Two days a week
Note that the full response returned from the model also contains the sources used as context (see “sources” field). You can see the full response here.
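For instance, assuming the SDK response object exposes the answer text and a sources list (the attribute names here are an assumption), attribution could be surfaced like this:
print(response.answer)           # the answer text
for source in response.sources:  # assumed attribute holding the retrieved contexts
    print(source)                # each entry points back to a document in your Library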
However, if the answer to the question is not found in any of the documents, the model will indicate that by returning an empty response. For instance, if we ask the following question:
response = ai21.Library.Answer.execute(question="What's my meal allowance when working from home?")
The response will be empty. You can use the field answerInContext in the response to check this. For example, for the question above:
if response.answerInContext:
    print(response.answer)
else:
    print("The answer is not in the documents")
For the question above, the code will print The answer is not in the documents.
You can also do it via our Studio platform:

Step 3: Explore more options
If you have a large collection of documents and files, it can be helpful to refine your retrieval process. By adding labels and assigning paths to each document, you can narrow down the search and get more accurate results, ultimately saving time. We provide several options for that purpose:
- Search within a specific path in your library: Focus your search on a particular location within your knowledge base.
- Search only for documents with specific labels: Filter your search to include only documents that have been assigned certain labels.
- Search within a designated group of documents: Specify the document IDs of a particular set of files, allowing the model to perform the search exclusively within that group.
The latter option is particularly valuable when you already have a retrieval mechanism in place and a good idea of which document(s) might contain the answer, especially if those documents are lengthy. Since our contextual-answers mechanism operates on document chunks rather than entire documents, you will get precise answers quickly and efficiently, without having to process an entire lengthy document.
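As a rough sketch, these filters might be passed to the answer call as shown below; the parameter names path, labels, and file_ids are assumptions, so consult the API reference for the exact ones:
# Restrict retrieval to a directory path in the Library (parameter name is an assumption)
response = ai21.Library.Answer.execute(
    question="How many days can I work from home?",
    path="/policies/hr",
)

# Or restrict retrieval to documents carrying specific labels or specific file IDs
response = ai21.Library.Answer.execute(
    question="How many days can I work from home?",
    labels=["policy"],
    file_ids=[file_id],  # IDs returned by the upload call in Step 1
)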