Jamba

Best-in-class quality LLM with unmatched throughput performance

πŸ¦–πŸ Jamba series

Jamba is our best-in-class language model, built specifically to meet the needs of the enterprise. You can embed Jamba Instruct, our business-ready model that is fine-tuned to follow instructions and equipped with safety guardrails, or build it yourself with our open-source base model.

Jamba has a knowledge cutoff of February 2024.

Jamba Instruct

Built on top of our flagship base model, Jamba Instruct is tailored for commercial use. It is a chat model with instruction-following capability that integrates safety features and guardrails. Most importantly, this model is optimized for real-world deployment. Jamba responses can include markdown; if you do not want markdown in any responses, say so in your system prompt or initial contextual prompt.
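As a minimal sketch of steering responses away from markdown, the request below puts the instruction in the system message. The message shape mirrors a typical chat-completion request body; the model identifier and field names here are illustrative assumptions, so consult the API reference for the exact format.

```python
import json

def build_request(user_message: str) -> dict:
    """Build a chat-style request whose system prompt forbids markdown."""
    return {
        "model": "jamba-instruct",  # assumed model identifier
        "messages": [
            # The system message asks the model for plain text only.
            {
                "role": "system",
                "content": "Respond in plain text only; do not use markdown formatting.",
            },
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Summarize our Q3 results in three sentences.")
print(json.dumps(payload, indent=2))
```

Because the instruction lives in the system message, it applies to every turn of the conversation rather than needing to be repeated per user message.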

Jamba (base model)

Jamba combines the structured state space model (SSM) technology of Mamba with the Transformer architecture, enabling a large 256K-token context window. This hybrid approach draws on the strengths of both architectures, improving throughput and memory efficiency while preserving quality and scalability.

Want to build? Go to Jamba on Hugging Face.
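For building with the open base model, a sketch along these lines is the usual starting point with the `transformers` library. The checkpoint name and generation settings are assumptions; check the model card on Hugging Face for the exact requirements (the full model needs substantial GPU memory, so the import and download are deferred into the function).

```python
MODEL_ID = "ai21labs/Jamba-v0.1"  # assumed checkpoint name; verify on Hugging Face

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the Jamba base model and complete the given prompt."""
    # Deferred imports: transformers and the model weights are heavy,
    # so nothing is downloaded until this function is actually called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (requires the model weights and a suitable GPU):
# text = generate("In the recent quarter, our company")
```

Note that this is the base model: it is a plain completion model without the chat formatting or safety guardrails of Jamba Instruct, so prompts should be written as text to continue rather than as instructions.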

Want to know more about Jamba? Check out the blog post.

Interested in the architecture? Read the whitepaper.