## Model Information
| Slug | minimax-m2 |
|---|---|
| Release Date | October 27, 2025 |
| AIME 2025 | 0.78 |
| HLE | 0.125 |
| GPQA | 0.78 |
| SWE-Bench Verified | 0.694 |
| Terminal-Bench | 0.463 |
| BrowseComp | 0.44 |
## Organization

| Name | MiniMax |
|---|---|
| Website | https://www.minimax.io/ |
## Model Description
MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning, tool use, and multi-step task execution while maintaining low latency and deployment efficiency.
The model excels in code generation, multi-file editing, compile-run-fix loops, and test-validated repair, showing strong results on SWE-Bench Verified, Multi-SWE-Bench, and Terminal-Bench. It also performs competitively in agentic evaluations such as BrowseComp and GAIA, effectively handling long-horizon planning, retrieval, and recovery from execution errors.
Benchmarked by [Artificial Analysis](https://artificialanalysis.ai/models/minimax-m2), MiniMax-M2 ranks among the top open-source models for composite intelligence, spanning mathematics, science, and instruction-following. Its small activation footprint enables fast inference, high concurrency, and improved unit economics, making it well-suited for large-scale agents, developer assistants, and reasoning-driven applications that require responsiveness and cost efficiency.
To avoid degrading this model's performance, MiniMax strongly recommends preserving reasoning between turns. Learn more about passing reasoning back with `reasoning_details` in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks).
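The recommendation above means the assistant's `reasoning_details` field should be echoed back verbatim in the conversation history rather than stripped. A minimal two-turn sketch against the OpenRouter chat-completions endpoint; the `ask` and `append_assistant_turn` helpers are illustrative, not part of any SDK:

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenRouter endpoint

def ask(messages, api_key):
    """Send the conversation so far to MiniMax-M2 and return the assistant message."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": "minimax/minimax-m2", "messages": messages}).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]

def append_assistant_turn(messages, assistant_message):
    """Echo the assistant turn back into the history, keeping reasoning_details
    intact (per MiniMax's recommendation) instead of dropping it."""
    turn = {"role": "assistant", "content": assistant_message.get("content", "")}
    if "reasoning_details" in assistant_message:
        turn["reasoning_details"] = assistant_message["reasoning_details"]
    messages.append(turn)
    return messages
```

On the next call to `ask`, the preserved reasoning blocks travel with the history, so the model does not have to re-derive its earlier chain of thought.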
## Available at 41 Providers
| Provider | Type | Model Name | Original Model | Input ($/1M) | Output ($/1M) |
|---|---|---|---|---|---|
| AIHubMix | | coding-minimax-m2-free | coding-minimax-m2-free | $0.00 | $0.00 |
| Nvidia | | minimax-m2 | minimaxai/minimax-m2 | $0.00 | $0.00 |
| Routeway | | MiniMax: MiniMax M2 (Free) | minimax-m2:free | $0.00 | $0.00 |
| Together AI | | MiniMax M2 | MiniMaxAI/MiniMax-M2 | $0.00 | $0.00 |
| AIHubMix | | cc-minimax-m2 | cc-minimax-m2 | $0.10 | $0.10 |
| AIHubMix | | cc-MiniMax-M2 | cc-MiniMax-M2 | $0.10 | $0.10 |
| AIHubMix | | coding-minimax-m2 | coding-minimax-m2 | $0.20 | $0.20 |
| Atlas Cloud | | MiniMax-M2 | MiniMaxAI/MiniMax-M2 | $0.20 | $1.00 |
| CometAPI | | minimax-m2 | minimax-m2 | $0.24 | $0.96 |
| Routeway | | MiniMax: MiniMax M2 | minimax-m2 | $0.25 | $0.85 |
| OpenRouter | Chat, Code | MiniMax M2 | minimax/minimax-m2 | $0.26 | $1.00 |
| Kilo Code | Code | MiniMax: MiniMax M2 | minimax/minimax-m2 | $0.26 | $1.00 |
| Blackbox AI | Code | MiniMax: MiniMax M2 | minimax/minimax-m2 | $0.26 | $1.02 |
| WaveSpeed AI | Chat, Code | minimax-m2 | minimax/minimax-m2 | $0.28 | $1.10 |
| AIHubMix | | minimax-m2 | minimax-m2 | $0.29 | $1.15 |
| Novita AI | | minimax-m2 | minimax/minimax-m2 | $0.30 | $1.20 |
| MiniMax | | MiniMax-M2 | MiniMax-M2 | $0.30 | $1.20 |
| MiniMax (China) | | MiniMax-M2 | MiniMax-M2 | $0.30 | $1.20 |
| Requesty | | minimaxi/MiniMax-M2 | | $0.30 | $1.20 |
| Vercel AI Gateway | | MiniMax M2 | minimax-m2 | $0.30 | $1.20 |
| ZenMUX | | MiniMax: MiniMax M2 | minimax/minimax-m2 | $0.30 | $1.20 |
| GMI Cloud | | MiniMax-M2 | MiniMaxAI/MiniMax-M2 | $0.30 | $1.20 |
| 302.AI | | minimax/minimax-m2 | minimax/minimax-m2 | $0.30 | $1.20 |
| StreamLake | | MiniMax-M2 | MiniMax-M2 | $0.30 | $1.20 |
| CommonStack | | MiniMax: MiniMax M2 | minimax/minimax-m2 | $0.30 | $1.20 |
| 302.AI | | MiniMax-M2 | MiniMax-M2 | $0.33 | $1.32 |
| Cortecs | | MiniMax-M2 | minimax-m2 | $0.39 | $1.57 |
| Poe | | Minimax-M2 | minimax-m2 | $3,300.00 | - |
| Nano-GPT | | MiniMax M2 | MiniMax-M2 | - | - |
| Ollama Cloud | | minimax-m2 | minimax-m2 | - | - |
| ValorGPT | | MI | minimax-minimax-m2 | - | - |
| Yupp | Chat | MiniMax M2 (OpenRouter) | minimax/minimax-m2 | - | - |
| LangDB | | minimax-m2 | minimax-m2 | - | - |
| Yupp | Chat | MiniMax M2 (MiniMax) | MiniMax-M2 | - | - |
| Google Vertex AI | | Minimax M2 | minimax-m2 | - | - |
| Arena AI | Chat | minimax-m2 | | - | - |
| Okara | Chat | Minimax M2 | minimax-m2 | - | - |
| RouterLink | | MiniMax M2 | MiniMax M2 | - | - |
| Qiniu | | Minimax/Minimax-M2 | minimax/minimax-m2 | - | - |
| Writingmate | Chat, Code | MiniMax: MiniMax M2 | minimax/minimax-m2 | - | - |
| LLM Stats | Chat | MiniMax M2 | minimax-m2 | - | - |
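The prices above are quoted in dollars per million tokens, so a single request costs input_tokens/1,000,000 × input rate plus output_tokens/1,000,000 × output rate. A minimal sketch; the `request_cost` helper and the sample token counts are illustrative, while the $0.30/$1.20 rates are the MiniMax first-party row from the table:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars for one request, given $/1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# MiniMax first-party rate from the table: $0.30 in / $1.20 out per 1M tokens.
cost = request_cost(8_000, 2_000, 0.30, 1.20)
print(f"${cost:.4f}")  # roughly $0.0048
```

At these rates a typical coding-agent turn costs a fraction of a cent, which is what the description means by "improved unit economics."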