# Mixtral 8x22B Instruct

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:

- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/).

#moe

## Model Information

- **Organization**: [Mistral](/llm.txt)
- **Slug**: mixtral-8x22b-instruct
- **Available Providers**: 9
- **Release Date**: April 17, 2024

## Providers

| Provider | Name | Input ($/1M tokens) | Output ($/1M tokens) | Free | Link |
|----------|------|---------------------|----------------------|------|------|
| [OpenRouter](/llm/openrouter.txt) | Mixtral 8x22B Instruct | 2.00 | 6.00 | | [View](https://openrouter.ai/mistralai/mixtral-8x22b-instruct) |
| [ValorGPT](/llm/valorgpt.txt) | Mixtral 8x22B Instruct | | | | [View](https://www.valorgpt.com/models/mistralai-mixtral-8x22b-instruct) |
| [Vercel AI Gateway](/llm/vercel.txt) | Mixtral MoE 8x22B Instruct | 1.20 | 1.20 | | |
| [Yupp](/llm/yupp.txt) | Mixtral 8x22B Instruct (OpenRouter) | | | | |
| [LangDB](/llm/langdb.txt) | mixtral-8x22b-instruct | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | Mistral: Mixtral 8x22B Instruct | 2.00 | 6.00 | | |
| [Blackbox AI](/llm/blackboxai.txt) | Mistral: Mixtral 8x22B Instruct | 0.90 | 0.90 | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | mixtral-8x22b-instruct | 2.20 | 6.60 | | |
| [Writingmate](/llm/writingmate.txt) | Mistral: Mixtral 8x22B Instruct | | | | [View](https://writingmate.ai/models/mistralai/mixtral-8x22b-instruct) |
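Several of these providers expose the model through OpenAI-compatible chat completions APIs. As a minimal sketch, here is what a request against OpenRouter's public endpoint might look like, using the model slug from its listing above; the endpoint URL and request shape follow OpenRouter's documented API, and the `OPENROUTER_API_KEY` environment variable is a placeholder for your own key:

```python
import os

import requests

# Minimal sketch: query Mixtral 8x22B Instruct via OpenRouter's
# OpenAI-compatible chat completions endpoint. The model slug matches
# the OpenRouter row in the providers table; the API key environment
# variable name is an assumption for illustration.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mixtral-8x22b-instruct",
        "messages": [
            {"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

At OpenRouter's listed rates ($2.00 input / $6.00 output per 1M tokens), a call consuming 1,000 input tokens and 500 output tokens would cost roughly 1,000/1,000,000 × $2.00 + 500/1,000,000 × $6.00 ≈ $0.005.

---

[← Back to all providers](/llm.txt)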