# Mixtral 8x7B Instruct

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, fine-tuned by Mistral for chat and instruction following. It combines 8 experts (feed-forward networks) for a total of 47 billion parameters, with two experts active per token. #moe

## Model Information

- **Organization**: [Mistral](/llm.txt)
- **Slug**: mixtral-8x7b-instruct
- **Available at Providers**: 8
- **Release Date**: December 10, 2023

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [OpenRouter](/llm/openrouter.txt) | Mixtral 8x7B Instruct | 0.54 | 0.54 | | [View](https://openrouter.ai/mistralai/mixtral-8x7b-instruct) |
| [ValorGPT](/llm/valorgpt.txt) | Mixtral 8x7B Instruct | | | | [View](https://www.valorgpt.com/models/mistralai-mixtral-8x7b-instruct) |
| [Yupp](/llm/yupp.txt) | Mixtral 8x7B Instruct (OpenRouter) | | | | |
| [LangDB](/llm/langdb.txt) | mixtral-8x7b-instruct | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | Mistral: Mixtral 8x7B Instruct | 0.54 | 0.54 | | [View](https://kilo.ai/models/mistralai/mixtral-8x7b-instruct) |
| [Blackbox AI](/llm/blackboxai.txt) | Mistral: Mixtral 8x7B Instruct | 0.08 | 0.24 | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | mixtral-8x7b-instruct | 0.59 | 0.59 | | |
| [Writingmate](/llm/writingmate.txt) | Mistral: Mixtral 8x7B Instruct | | | | [View](https://writingmate.ai/models/mistralai/mixtral-8x7b-instruct) |

---

[← Back to all providers](/llm.txt)
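The sparse Mixture-of-Experts idea described above — a router scoring 8 expert feed-forward networks and evaluating only the top 2 per token — can be sketched as follows. This is an illustrative toy, not Mistral's implementation: the scalar "experts" and router logits here are hypothetical stand-ins for the real per-token feed-forward networks and learned gating layer.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def top2_moe(x, experts, router_logits):
    """Route input x to the 2 highest-scoring experts and mix their outputs."""
    # pick the two experts with the largest router scores
    top2 = sorted(range(len(experts)), key=lambda i: router_logits[i], reverse=True)[:2]
    # renormalize the gate weights over just the selected experts
    gates = softmax([router_logits[i] for i in top2])
    # weighted sum of only the chosen experts' outputs (the other 6 are never run)
    return sum(g * experts[i](x) for g, i in zip(gates, top2))

# toy setup: 8 "experts", each a trivial scalar function
experts = [lambda x, k=k: x * k for k in range(8)]
router_logits = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 1.0]
y = top2_moe(1.0, experts, router_logits)
```

Only 2 of the 8 experts execute per call, which is why Mixtral's compute per token tracks its active parameters rather than the full 47B.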