# Codestral Mamba

A 7.3B-parameter Mamba-based model designed for code and reasoning tasks.

- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k-token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models on code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution

## Model Information

- **Organization**: [Mistral](/llm.txt)
- **Slug**: codestral-mamba
- **Available at Providers**: 1

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [Writingmate](/llm/writingmate.txt) | Mistral: Codestral Mamba | | | | [View](https://writingmate.ai/models/mistralai/codestral-mamba) |

---

[← Back to all providers](/llm.txt)
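As a usage sketch, hosted models like this one are commonly called through a chat-completions-style HTTP API. The endpoint URL and the model identifier `open-codestral-mamba` below are assumptions for illustration, not details stated on this page; consult the provider's own API reference for the exact values.

```python
import json

# Assumed endpoint; verify against your provider's documentation.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "open-codestral-mamba") -> dict:
    """Assemble the JSON body for a single-turn completion request.

    The model name is an assumption for illustration purposes.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # a low temperature is typical for code generation
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with an API key in the `Authorization` header; only the request construction is shown here.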