Contextual Answers is a plug-and-play solution that organizations can seamlessly embed into their digital assets to enhance the efficiency and accuracy of information queries. The solution provides businesses with LLM technology to instantaneously answer user questions about any organizational data. The answers will be based solely on this data, and will be backed by the proper context from the organizational knowledge base.
The Contextual Answers package we are offering is the full solution: A RAG Engine with built-in mechanisms for every step of the way, including document extraction, chunking, embeddings, vector storage, and a retrieval mechanism that fetches the most relevant contexts from your organizational knowledge base, as well as a model that provides an answer based on this context.
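To make the chunking step concrete, here is a minimal, illustrative sketch of fixed-size chunking with overlap. This is a toy example of the idea, not the RAG Engine's actual chunking strategy:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size chunks (toy illustration,
    not the RAG Engine's actual chunking algorithm)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

policy = "Employees may work from home two days a week. " * 20
chunks = chunk_text(policy)
print(len(chunks))  # the policy text split into overlapping chunks
```

Overlap between neighboring chunks helps the retrieval step later on: a sentence that straddles a chunk boundary still appears whole in at least one chunk.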
As an example, let’s leverage these capabilities to create an efficient knowledge management system (KMS). Ready to get started?
You can upload your files to the RAG Engine, where we offer free storage of up to 1 GB (want more? Contact us: [email protected]). You can also integrate your organization’s data sources, such as Google Drive, Amazon S3, and others, to automatically sync documents with the RAG Engine. To enable data source integration, contact us.
In this example, we will upload three documents with company policies (working from abroad, hybrid work guidelines, IT security).
This can be done with a simple call using our Python SDK or an HTTP request:
from ai21 import AI21Client

client = AI21Client(
    # The API key is read from the AI21_API_KEY environment variable
    # by default, so it can be omitted here.
)

file_path = "path/to/your/policy.txt"  # the document to upload
file_id = client.library.files.create(file_path=file_path)
You can also do it via our Studio platform:
You can upload a file as it is, store it in a directory (for those who like working with directories) or add labels. This can help you organize your filing system, while focusing your questions on a subset of documents.
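As a sketch, attaching a directory path and labels at upload time might look like the following. The `path` and `labels` parameter names are assumptions based on the description above, so confirm them against the SDK reference before relying on them:

```python
import os

# Assumed parameter names for organizing an upload (verify against the SDK docs):
upload_args = {
    "file_path": "hybrid-work-guidelines.txt",
    "path": "policies/hr",        # store the file under a directory (assumed name)
    "labels": ["policy", "hr"],   # tag it for label-based filtering later (assumed name)
}

if os.environ.get("AI21_API_KEY"):  # the call below needs a valid API key
    from ai21 import AI21Client

    client = AI21Client()
    file_id = client.library.files.create(**upload_args)
```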
Currently, we only support uploading files in .txt format. For .pdf files, we recommend using PyPDF2 to convert them to .txt (note that extraction quality may suffer for documents containing tables). Native support for PDFs is coming soon!
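A minimal conversion sketch using PyPDF2 (installed via `pip install PyPDF2`); as noted above, table layouts may not survive extraction intact:

```python
def pdf_to_txt(pdf_path, txt_path):
    """Extract plain text from a PDF, page by page, and save it as .txt.
    Tables and complex layouts may not be extracted cleanly."""
    from PyPDF2 import PdfReader  # pip install PyPDF2

    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(text)
    return text

# Example (file names are placeholders):
# pdf_to_txt("it-security-policy.pdf", "it-security-policy.txt")
```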
Your users can now ask a question and immediately get an answer with attribution to the relevant source. The system works as follows: the question is used as a query for a retrieval mechanism, which searches the entire knowledge base and retrieves the most relevant contexts; the model then generates an answer based on those contexts.
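To illustrate the retrieval step, here is a toy lexical scorer that ranks chunks by word overlap with the question. The real engine retrieves with embeddings and a vector store; this is only a sketch of the idea:

```python
def top_contexts(question, chunks, k=2):
    """Rank chunks by naive word overlap with the question (a toy stand-in
    for embedding-based retrieval) and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

chunks = [
    "Employees may work from home two days a week.",
    "Laptops must be encrypted at all times.",
    "Working from abroad requires manager approval.",
]
print(top_contexts("How many days can I work from home?", chunks, k=1))
# -> ['Employees may work from home two days a week.']
```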
With rapid changes occurring in work environments lately, a common question from employees is about working remotely:
response = client.library.answer.create(question="How many days can I work from home?")
The response will be:
Two days a week
Note that the full response returned from the model also contains the sources used as context (see “sources” field). You can see the full response here.
However, if the answer to the question is not found in any of the documents, the model will indicate this by returning an empty response. For instance, if we ask the following question:
response = client.library.answer.create(question="What's my meal allowance when working from home?")
The response will be empty. You can use the answerInContext field in the response to check this (response.answer_in_context in the Python SDK). For example, for the question above:
if not response.answer_in_context:
    print("The answer is not in the documents")
The code will print The answer is not in the documents.
You can also do it via our Studio platform:
If you have a large collection of documents and files, it can be helpful to refine your retrieval process. By adding labels and assigning paths to each document, you can narrow down your process and achieve more accurate results, ultimately saving time. We provide several options for that purpose:
- Search within a specific path in your library: Focus your search on a particular location within your knowledge base.
- Search only for documents with specific labels: Filter your search to include only documents that have been assigned certain labels.
- Search within a designated group of documents: Specify the document IDs of a particular set of files, allowing the model to perform the search exclusively within that group.
The latter option is particularly valuable when you already have a retrieval mechanism in place and an idea of which document(s) might contain the answer, but those documents are quite lengthy. Since our contextual-answers mechanism operates on document chunks rather than entire documents, you will obtain precise answers quickly and efficiently, without having to process the entire document.
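Putting the filtering options together, a question scoped by path, labels, or a group of file IDs might look like the following sketch. The `path`, `labels`, and `file_ids` parameter names are assumptions drawn from the options above; confirm them against the SDK reference:

```python
import os

# Assumed parameter names for scoping a question (verify against the SDK docs):
query_args = {
    "question": "How many days can I work from home?",
    "path": "policies/hr",                        # search within a specific library path
    "labels": ["policy"],                         # only documents carrying these labels
    "file_ids": ["<file-id-1>", "<file-id-2>"],   # restrict to a known group of files
}

if os.environ.get("AI21_API_KEY"):  # the call below needs a valid API key
    from ai21 import AI21Client

    client = AI21Client()
    response = client.library.answer.create(**query_args)
```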