# Mixtral 8x22B (base)

Mixtral 8x22B is a large-scale language model from Mistral AI. It is a mixture-of-experts model consisting of 8 experts of 22 billion parameters each, with each token routed to 2 of the 8 experts at a time (a toy routing sketch appears at the end of this page). It was announced via a post on [X](https://twitter.com/MistralAI/status/1777869263778291896).

#moe

## Model Information

- **Organization**: [Mistral](/llm.txt)
- **Slug**: mixtral-8x22b
- **Available Providers**: 1

## Providers

| Provider | Name | $ Input (per 1M tokens) | $ Output (per 1M tokens) | Free | Link |
|----------|------|-------------------------|--------------------------|------|------|
| [Writingmate](/llm/writingmate.txt) | Mistral: Mixtral 8x22B (base) | | | | [View](https://writingmate.ai/models/mistralai/mixtral-8x22b) |

---

[← Back to all providers](/llm.txt)
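
To illustrate the top-2 routing described above, here is a minimal PyTorch sketch of a mixture-of-experts layer. This is a toy example with made-up dimensions (`d_model=64`, `d_ff=256`) and hypothetical names; it is not Mistral's implementation, and Mixtral's real layers are far larger. It only shows the mechanism: a router scores all 8 experts per token, but only the 2 highest-scoring experts actually run.

```python
# Toy top-2 mixture-of-experts layer. Dimensions and names are illustrative,
# not Mixtral 8x22B's real architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Feed-forward MoE block: a router picks 2 of 8 experts per token."""

    def __init__(self, d_model: int = 64, d_ff: int = 256,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: one logit per expert for each token.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model), batch and sequence dims flattened together.
        logits = self.router(x)                            # (tokens, num_experts)
        weights, indices = torch.topk(logits, self.top_k)  # top-2 per token
        weights = F.softmax(weights, dim=-1)               # renormalize over the 2
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest are skipped,
        # which is why active parameters per token are far below the total.
        for e, expert in enumerate(self.experts):
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel():
                out[token_idx] += (weights[token_idx, slot].unsqueeze(-1)
                                   * expert(x[token_idx]))
        return out

# Usage: route 10 toy tokens through the layer.
layer = Top2MoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Because each token touches only 2 of the 8 experts, the compute per token corresponds to a fraction of the model's total parameter count, which is the main motivation for the mixture-of-experts design.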