Jamba

Best-in-class LLM with unmatched throughput

Jamba series

Jamba is our best-in-class language model, built specifically to meet the needs of the enterprise. You can embed Jamba Instruct, our business-ready model fine-tuned to follow instructions and equipped with safety guardrails, or build on our open-source base model yourself.

Jamba has a knowledge cutoff of February 2024.

Jamba Instruct

Built on top of our flagship base model, Jamba Instruct is tailored for commercial use. It is a chat model with instruction-following capability that integrates safety features and guardrails. Most importantly, this model is optimized for real-world deployment. Jamba responses can include markdown; if you do not want markdown in any responses, say so in your system prompt or initial contextual message.
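As a sketch of how that looks in practice, the snippet below builds a chat request whose system message asks for plain-text output. The endpoint path and the `jamba-instruct` model name are assumptions here; check the API reference for the current values.

```python
# Sketch: suppressing markdown in Jamba Instruct responses via the system
# prompt. The endpoint URL and model name are assumptions; verify them
# against the official API reference before use.
import json
import os
import urllib.request

API_URL = "https://api.ai21.com/studio/v1/chat/completions"  # assumed endpoint

def build_request(user_message: str) -> dict:
    """Build a chat payload whose system message forbids markdown."""
    return {
        "model": "jamba-instruct",  # assumed model identifier
        "messages": [
            # The system message is the place to request plain-text output.
            {"role": "system",
             "content": "Respond in plain text only. Do not use markdown formatting."},
            {"role": "user", "content": user_message},
        ],
    }

def send(payload: dict) -> dict:
    """POST the payload with a bearer token from the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['AI21_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("Summarize our Q3 results in three sentences.")
# send(payload)  # uncomment with a valid AI21_API_KEY in the environment
```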

See the API

Jamba (base model)

The Jamba base model is available on Hugging Face, where you can build on and experiment with it. It is not provided as a foundation model on the AI21 Platform.
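For experimentation, the base model can be loaded with the standard `transformers` auto classes. The checkpoint id `ai21labs/Jamba-v0.1` below is an assumption based on the published release; confirm it on the model card (the weights are tens of gigabytes, so the download is left behind a guard).

```python
# Sketch: loading the open Jamba base model from Hugging Face with the
# transformers library. The checkpoint id is an assumption; check the
# model card before running. Downloading requires substantial disk and RAM.

MODEL_ID = "ai21labs/Jamba-v0.1"  # assumed Hugging Face checkpoint id

def load_jamba(model_id: str = MODEL_ID):
    """Download the tokenizer and weights and return both (heavy!)."""
    # Import inside the function so the sketch can be read or imported
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_jamba()
    inputs = tokenizer("In the distant future,", return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output[0]))
```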

Jamba combines the Structured State Space (SSM) technology of Mamba with the Transformer architecture, enabling a large 256K context window. This hybrid approach improves throughput and memory efficiency while preserving the modeling strengths of both architectures, yielding better performance and scalability than either alone.

Want to know more about Jamba? Check out the blog post.

Interested in the architecture? Read the whitepaper.