July 3, 2025

Introducing Jamba 1.7!

Release Date: July 3, 2025

We’re excited to introduce the release of Jamba 1.7!

What’s New:

  • Smarter Answers with Enhanced Grounding: Jamba 1.7 now delivers more complete and accurate responses by better understanding context and focusing on what matters. Whether you’re tackling complex questions or seeking precise insights, Jamba 1.7 is an ideal fit for question-answering and instruction-following tasks.

  • Faster and More Efficient: Thanks to optimized configurations and quantization, Jamba 1.7 delivers top-tier performance without sacrificing quality.

  • Self-Hosted Ready: Want more control? Jamba 1.7 is now available for self-hosted deployment, giving teams flexibility and scalability in their own environments.

Get Started:

Access via AI21 Studio or Hugging Face - Jamba Large 1.7, Jamba Mini 1.7.

March 22, 2025

Introducing Jamba 1.6!

Release Date: March 6, 2025

We’re pleased to announce the release of Jamba 1.6.

Key Updates:

  • Enhanced Model Quality: Jamba Large 1.6 outperforms leading open models from Cohere, Meta, and Mistral on quality (Arena Hard) and speed.
  • Long Context Processing: With a 256K context window and hybrid SSM-Transformer architecture, Jamba excels at efficiently and accurately processing long contexts, outperforming leading open model competitors on RAG and long-context QA benchmarks.
  • Secure Deployment: Available via AI21 Studio (SaaS) or for private deployment (VPC/on-prem) via download from Hugging Face. More deployment options coming soon.
  • Improved Efficiency: Faster response times with high accuracy.

Get Started:

Access via AI21 Studio or Hugging Face - Jamba Large 1.6, Jamba Mini 1.6.
