Tokenization is the process of converting text into numerical tokens that language models can understand. Learn how to use AI21’s tokenizer for Jamba models with practical examples.
Tokenization is both the first and final step in language model processing. Since machine learning models can only work with numerical data, text must be converted into numbers that models can understand and manipulate. The tokenization process breaks down text into smaller units called tokens, which can represent:
Words or subwords: “hello” → [15496]
Characters: “AI” → [32, 73]
Byte-level representations: For handling any Unicode text across all languages
Each token is assigned a unique numerical ID from the model’s vocabulary.
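To make the token-to-ID mapping concrete, here is a minimal, purely illustrative sketch of a vocabulary lookup. The vocabulary, token strings, and ID numbers below are invented for demonstration and do not correspond to any real model's vocabulary:

```python
# Toy vocabulary mapping token strings to invented IDs.
# Real model vocabularies contain tens of thousands of entries.
vocab = {"hello": 15496, "Jam": 101, "ba": 102, "<unk>": 0}

def encode(tokens):
    """Look up each token's ID, falling back to an unknown-token ID."""
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

print(encode(["hello"]))      # [15496]
print(encode(["Jam", "ba"]))  # [101, 102]
```

Tokens missing from the vocabulary map to the `<unk>` ID here; production tokenizers instead use subword or byte-level fallbacks so that any input text can be represented.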
Tokenization serves as both the entry point and exit point of text processing in language models: input text is converted into tokens with corresponding numerical indices from the tokenizer's vocabulary, and output tokens are converted back into text. In a standard language model workflow:
Encoding Phase: We first convert input text into tokens using a tokenizer. Each token receives a unique index number that the model can process.
Model Processing: The tokenized input flows through the model architecture:
Embedding layer: Transforms tokens into dense vector representations that capture semantic relationships
Transformer blocks: Process these vectors to understand context, relationships, and generate meaningful responses
Decoding Phase: Finally, we convert the model’s output tokens back into readable text by mapping token indices back to their corresponding words or subwords using the tokenizer’s vocabulary.
This encode → process → decode cycle ensures seamless conversion between human language and machine-readable formats, enabling effective communication with language models.
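The encode → process → decode cycle above can be sketched with a toy whitespace tokenizer. This is an illustration of the round-trip shape only, not the Jamba tokenizer; real tokenizers use subword algorithms such as BPE, and the vocabulary here is invented:

```python
# Illustrative whitespace tokenizer demonstrating the
# encode -> (model processing) -> decode round trip.
vocab = {"models": 1, "read": 2, "tokens": 3}
inverse_vocab = {i: t for t, i in vocab.items()}

def encode(text):
    """Encoding phase: split text and map each word to its ID."""
    return [vocab[word] for word in text.split()]

def decode(ids):
    """Decoding phase: map IDs back to words and rejoin them."""
    return " ".join(inverse_vocab[i] for i in ids)

ids = encode("models read tokens")   # e.g. [1, 2, 3]
# ... in a real workflow, the model would process `ids` here ...
text = decode(ids)                   # back to "models read tokens"
print(ids, "->", text)
```

Because `decode` inverts `encode` exactly, the round trip is lossless for any text covered by the vocabulary.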
Choose the appropriate tokenizer for your Jamba model:
```python
from ai21_tokenizer import Tokenizer, PreTrainedTokenizers

tokenizer = Tokenizer.get_tokenizer(PreTrainedTokenizers.JAMBA_MINI_TOKENIZER)
text = "Jamba Mini model says hello"
encoded = tokenizer.encode(text)
print(f"Jamba Mini encoded: {encoded}")
```