Jamba 1.5 models

Jamba 1.5 Mini & Jamba 1.5 Large Model Details

Overview

Jamba is the world’s first production-grade Mamba-based model. By enhancing Mamba Structured State Space Model (SSM) technology with elements of the traditional Transformer architecture, Jamba compensates for the inherent limitations of a pure SSM model. Offering a 256K effective context window, it provides remarkable gains in throughput and efficiency. Notably, Jamba outperforms or matches other state-of-the-art models in its size class on a wide range of benchmarks.
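
The hybrid design interleaves a small number of Transformer attention layers among Mamba (SSM) layers, with mixture-of-experts modules replacing some of the dense feed-forward layers. The sketch below is a minimal illustration of such a layout; the block size, the 1:7 attention-to-Mamba ratio, the MoE frequency, and the attention layer's position are assumptions drawn from the publicly described Jamba architecture, not the exact Jamba 1.5 configuration.

```python
# Illustrative layout of a hybrid SSM-Transformer ("Jamba-style") layer stack.
# Block size, 1:7 attention-to-Mamba ratio, MoE frequency, and the attention
# layer's position are assumptions for illustration only.

def hybrid_layout(num_blocks: int = 4,
                  layers_per_block: int = 8,
                  attention_index: int = 4,
                  moe_every: int = 2) -> list:
    """Return one descriptor string per layer of a hypothetical hybrid stack."""
    layers = []
    for _ in range(num_blocks):
        for i in range(layers_per_block):
            mixer = "attention" if i == attention_index else "mamba"   # 1 attention : 7 Mamba
            ffn = "moe" if i % moe_every == 1 else "dense-mlp"         # MoE on every other layer
            layers.append(f"{mixer} + {ffn}")
    return layers

if __name__ == "__main__":
    for idx, desc in enumerate(hybrid_layout()):
        print(f"layer {idx:02d}: {desc}")
```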

Supported Languages

Jamba models officially support 9 languages:

  • English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, Hebrew.

What Are They Good For?

Both the Jamba 1.5 Mini and Jamba 1.5 Large models were trained on a massive corpus of text, making them highly versatile, general-purpose text generators, capable of composing human-like text and solving complex tasks such as question answering, text summarization, information extraction, drafting, text classification, and many others.

Until now, LLMs have been primarily built on the conventional Transformer architecture. While undoubtedly powerful, this architecture presents two major drawbacks:

  • Large memory footprint: A Transformer’s memory footprint grows with context length, because the attention key/value (KV) cache stores an entry for every token in every layer. This makes it challenging to run long context windows or numerous parallel batches without extensive hardware resources, limiting widespread opportunities to experiment and deploy (a rough estimate follows this list).
  • Slow inference as context grows: The Transformer’s attention mechanism scales quadratically with sequence length and slows down throughput, as each token depends on the entire sequence that came before it, placing long-context use cases outside the scope of efficient production.

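To make the memory drawback concrete, the sketch below estimates the attention KV-cache footprint for a hypothetical 70B-class dense Transformer (80 layers, 8 KV heads, head dimension 128, 16-bit values); all of these dimensions are illustrative assumptions, not figures for any specific model. Under those assumptions, a single 256K-token sequence needs roughly 80 GiB of KV cache, whereas an SSM layer's recurrent state stays constant regardless of context length.

```python
# Back-of-envelope comparison: attention KV-cache memory grows linearly with
# context length, while an SSM layer keeps a constant-size recurrent state.
# All model dimensions below are illustrative assumptions.

def kv_cache_bytes(seq_len: int,
                   num_layers: int = 80,
                   num_kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_value: int = 2) -> int:
    """KV cache = 2 (keys + values) * layers * kv_heads * head_dim * tokens * dtype bytes."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

if __name__ == "__main__":
    for context in (8_192, 65_536, 262_144):
        gib = kv_cache_bytes(context) / 2**30
        print(f"{context:>7} tokens -> ~{gib:5.1f} GiB of KV cache per sequence")
```
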
Jamba’s release marks two significant milestones in LLM innovation:

  1. Successfully incorporating Mamba alongside the Transformer architecture.
  2. Advancing the hybrid SSM-Transformer model to production-grade scale and quality.

Model Details

Engineers and data scientists at AI21 Labs created the model to help developers and businesses leverage AI to build real-world products with tangible value. Jamba 1.5 Mini and Jamba 1.5 Large support zero-shot instruction following across multiple languages. The Jamba models also provide developers with industry-leading APIs that perform a wide range of productivity tasks designed for commercial use.

  • Organization developing model: AI21 Labs
  • Model date: August 22nd, 2024
  • Model size: Jamba 1.5 Mini: 52B parameters (12B active); Jamba 1.5 Large: 398B parameters (94B active)
  • Model type: Joint Attention and Mamba (Jamba)
  • Knowledge cutoff date: March 5th, 2024
  • Input Modality: Text
  • Output Modality: Text
  • License: Jamba open model license
  • Contact: [email protected]

Intended Use

The Jamba family of models was designed and built primarily for developers and researchers who access its capabilities via the AI21 Studio API, through our cloud partners, or by running it in a local or private environment.
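
For the local or private deployment path, a minimal sketch using Hugging Face transformers is shown below; the repository name, dtype, and generation settings are assumptions for illustration rather than official deployment guidance, and the larger model will typically require multi-GPU or quantized deployment.

```python
# Minimal local-inference sketch with Hugging Face transformers. The checkpoint
# name, dtype, and generation settings are assumptions for illustration; consult
# the checkpoint's model card for its actual requirements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory relative to fp32
    device_map="auto",           # spread weights across available devices
)

messages = [{"role": "user",
             "content": "In two sentences, what does a hybrid SSM-Transformer architecture trade off?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```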

Given the limitations of Jamba (see below) and its wide range of general capabilities, AI21 has a set of responsible use guidelines and terms of use for our customers. We work with our customers to review their applications and have systems in place to redress issues and revoke access, if necessary.

While Jamba 1.5 Mini and Jamba 1.5 Large are general purpose language models, there are a number of tasks at which they excel. The following is a list of popular uses among our customers, but is by no means exhaustive; a brief API example follows the list.

Popular Use Cases

  • Language modeling and completion: Text generation based on prompting/examples (zero-shot, multi-shot)
  • Instruction following: Text generation and summarization based on natural language instructions
  • Sentiment analysis: Categorization of text/document based on understanding its meaning
  • Paraphrasing: Rewriting up to a full paragraph of text
  • Summarization: Providing a summary of long-form articles and conversation transcripts
  • Text recommendation: Offering improvements to a given text, for example increasing and diversifying vocabulary
  • Grammatical error correction: Checking the grammar in written work
  • Text segmentation: Splitting long pieces of text into appropriate segments based on topics
  • Question answering and chat: Single and multi-turn conversations grounded in reference data
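
As one concrete example of these use cases, the sketch below sends a short summarization request to a Jamba 1.5 model through the AI21 Studio API. The SDK class and method names and the model identifier reflect the author's understanding of the `ai21` Python SDK and are assumptions; check AI21's current documentation for the exact interface.

```python
# Illustrative summarization call against the AI21 Studio API. Class/method
# names and the model identifier below are assumptions, not verified usage.
from ai21 import AI21Client
from ai21.models.chat import ChatMessage

client = AI21Client()  # expects the AI21_API_KEY environment variable

transcript = ("Customer: My order arrived damaged.\n"
              "Agent: Sorry about that, a replacement ships today.")

response = client.chat.completions.create(
    model="jamba-1.5-mini",  # assumed model identifier
    messages=[
        ChatMessage(role="system", content="Summarize support conversations in one sentence."),
        ChatMessage(role="user", content=f"Summarize this transcript:\n{transcript}"),
    ],
    max_tokens=60,
)
print(response.choices[0].message.content)
```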

Performance and Benchmarks

Benchmark         Jamba 1.5 Mini   Jamba 1.5 Large
Arena Hard        46.1             65.4
MMLU              69.7             81.2
MMLU Pro          42.5             53.5
GPQA              32.3             36.9
ARC Challenge     85.7             93
BFCL              80.6             85.48
GSM-8K            75.5             87
RealToxicity      8.1              6.7

RULER Benchmark - Effective Context Length

Models                       Claimed Length   Effective Length   4K     8K     16K    32K    64K    128K   256K
Jamba 1.5 Large (94B/398B)   256K             256K               96.7   96.6   96.4   96     95.4   95.1   93.9
Jamba 1.5 Mini (12B/52B)     256K             256K               95.7   95.2   94.7   93.8   92.7   89.8   86.1
Gemini-1.5-pro               1M               > 128K             96.7   95.8   96     95.9   95.9   94.4   --
GPT-4-1106-preview           128K             64K                96.6   96.3   95.2   93.2   87     81.2   --
Llama3.1 (70B)               128K             64K                96.5   95.8   95.4   94.8   88.4   66.6   --
Command-R-plus (104B)        128K             32K                95.6   95.2   94.2   92     84.3   63.1   --
Llama3.1 (8B)                128K             32K                95.5   93.8   91.6   87.4   84.7   77     --
Mistral-Large 2 (123B)       128K             32K                96.2   96.1   95.1   93     78.8   23.7   --
Mixtral-8x22B (39B/141B)     64K              32K                95.6   94.9   93.4   90.9   84.7   31.7   --
Mixtral-8x7B (12.9B/46.7B)   32K              32K                94.9   92.1   92.5   85.9   72.4   44.5   --

Distribution

Jamba 1.5 models are available as an API directly from the AI21 SaaS platform, as well as through the following partners and platforms: Microsoft Azure, Google Cloud Vertex AI, Amazon Bedrock, Snowflake Cortex, NVIDIA NIM, LangChain, and LlamaIndex, with Databricks Marketplace and Together.AI coming soon.

Safety Benchmarks

Benchmark             Jamba 1.5 Mini   Jamba 1.5 Large
RealToxicity*         8.1              6.7
TruthfulQA (0-shot)   54.1             58.3

*Lower score is better

Model Compliance and Certifications

Ethical Considerations

AI21 Labs is on a mission to supercharge human productivity with machines working alongside humans as thought partners, thereby promoting human welfare and prosperity. To deliver on this promise, the technology must be deployed and used in a responsible and sustainable way, taking into consideration potential risks, including malicious use by bad actors, accidental misuse, and broader societal harms. We take these risks extremely seriously and put measures in place to mitigate them.

AI21 provides open access to Jamba that can be used to power a large variety of useful applications. We believe it is important to ensure that this technology is used in a responsible way, while allowing developers the freedom they need to experiment rapidly and deploy solutions at scale. Overall, we view the safe implementation of this technology as a partnership and collaboration between AI21 and our customers and encourage engagement and dialogue to raise the bar on responsible usage.

In order to use Jamba, you are required to comply with our Terms of Use and with the following usage guidelines. Provided you comply with these requirements, you may use Jamba to power applications with live users without any additional approval. We reserve the right to limit or suspend your access to Jamba at any time if we believe these terms or guidelines have been violated.

Please check these usage guidelines periodically, as they may be updated from time to time. For any questions, clarifications or concerns, please contact [email protected].

Limitations

There are a number of limitations inherent to neural network technology that apply to Jamba. These limitations require explanation and carry important caveats for the application and usage of Jamba.

  • Accuracy: Jamba, like other large pretrained language models, lacks important context about the world because it is trained on textual data and is not grounded in other modalities of experience such as video, real-world physical interaction, and human feedback. Like all language models, Jamba is far more accurate when responding to inputs similar to its training datasets. Novel inputs have a tendency to generate higher variance in its output.
  • Coherence and consistency: Responses from Jamba are sometimes inconsistent, contradictory, or contain seemingly random sentences and paragraphs.
  • Western/English bias: Jamba is trained primarily on English language text from the internet, and is best suited to classifying, searching, summarizing, and generating English text. Furthermore, Jamba has a tendency to hold and amplify the biases contained in its training dataset. As a result, groups of people who were not involved in the creation of the training data can be underrepresented, and stereotypes and prejudices can be perpetuated. Racial, religious, gender, socioeconomic, and other categorizations of human groups can be considered among these factors.
  • Explainability: It is difficult to explain or predict how Jamba will respond without additional training and fine tuning. This is a common issue with neural networks of this scope and scale.
  • Recency: Jamba was trained on a dataset created in March 2024, and therefore has no knowledge of events that have occurred after that date. We update our models regularly to keep them as current as possible, but there are notable gaps and inaccuracies in responses as a result of this lack of recency.

Given these known challenges presented by neural network technology, we've developed usage guidelines to minimize harm and address the limitations of Jamba.