# MiniMax M2

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning, tool use, and multi-step task execution while maintaining low latency and deployment efficiency.

The model excels in code generation, multi-file editing, compile-run-fix loops, and test-validated repair, showing strong results on SWE-Bench Verified, Multi-SWE-Bench, and Terminal-Bench. It also performs competitively in agentic evaluations such as BrowseComp and GAIA, effectively handling long-horizon planning, retrieval, and recovery from execution errors.

Benchmarked by [Artificial Analysis](https://artificialanalysis.ai/models/minimax-m2), MiniMax-M2 ranks among the top open-source models for composite intelligence, spanning mathematics, science, and instruction-following. Its small activation footprint enables fast inference, high concurrency, and improved unit economics, making it well-suited for large-scale agents, developer assistants, and reasoning-driven applications that require responsiveness and cost efficiency.

To avoid degrading this model's performance, MiniMax strongly recommends preserving reasoning between turns. Learn more about using reasoning_details to pass back reasoning in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks).
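As a minimal sketch of the recommendation above: when calling the model through a chat-completions-style API, copy the assistant message's `reasoning_details` back into the conversation history on the next turn rather than dropping it. The helper name (`build_next_turn`) and the exact shape of the `reasoning_details` entries here are illustrative assumptions; consult the linked docs for the authoritative field layout.

```python
def build_next_turn(messages, assistant_message, user_followup):
    """Append the previous assistant turn (with its reasoning_details
    preserved verbatim) plus the next user message to the history."""
    turn = {
        "role": "assistant",
        "content": assistant_message.get("content", ""),
    }
    # Pass reasoning back unchanged so the model keeps its prior
    # chain of thought between turns, as MiniMax recommends.
    if "reasoning_details" in assistant_message:
        turn["reasoning_details"] = assistant_message["reasoning_details"]
    return messages + [turn, {"role": "user", "content": user_followup}]


# Mock assistant response from a previous turn (shape is illustrative).
prior = {
    "content": "The refactor touches three files.",
    "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
}
history = [{"role": "user", "content": "Plan the refactor."}]
payload_messages = build_next_turn(history, prior, "Apply step 1.")
```

The resulting `payload_messages` list would be sent as the `messages` field of the follow-up request; the key point is simply that the assistant turn is echoed back with its reasoning intact instead of content-only.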
## Model Information

- **Organization**: [Minimax](/llm.txt)
- **Slug**: minimax-m2
- **Available at Providers**: 41
- **Release Date**: October 27, 2025

### Benchmark Scores

- AIME 2025: 0.78
- HLE: 0.125
- GPQA: 0.78
- SWE Bench: 0.694
- Terminal: 0.463
- Browsecomp: 0.44

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2 | 0.20 | 0.20 | | [View](https://aihubmix.com/model/coding-minimax-m2) |
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2-free | 0.00 | 0.00 | Yes | [View](https://aihubmix.com/model/coding-minimax-m2-free) |
| [AIHubMix](/llm/aihubmix.txt) | cc-minimax-m2 | 0.10 | 0.10 | | [View](https://aihubmix.com/model/cc-minimax-m2) |
| [AIHubMix](/llm/aihubmix.txt) | minimax-m2 | 0.29 | 1.15 | | [View](https://aihubmix.com/model/minimax-m2) |
| [AIHubMix](/llm/aihubmix.txt) | cc-MiniMax-M2 | 0.10 | 0.10 | | [View](https://aihubmix.com/model/cc-MiniMax-M2) |
| [Nvidia](/llm/nvidia.txt) | minimax-m2 | 0.00 | 0.00 | Yes | [View](https://build.nvidia.com/minimaxai/minimax-m2) |
| [Novita AI](/llm/novita.txt) | minimax-m2 | 0.30 | 1.20 | | |
| [MiniMax](/llm/minimax.txt) | MiniMax-M2 | 0.30 | 1.20 | | |
| [MiniMax (China)](/llm/minimaxcn.txt) | MiniMax-M2 | 0.30 | 1.20 | | |
| [Nano-GPT](/llm/nanogpt.txt) | MiniMax M2 | | | | |
| [Ollama Cloud](/llm/ollama.txt) | minimax-m2 | | | | [View](https://ollama.com/library/minimax-m2) |
| [OpenRouter](/llm/openrouter.txt) | MiniMax M2 | 0.26 | 1.00 | | [View](https://openrouter.ai/minimax/minimax-m2) |
| [Poe](/llm/poe.txt) | Minimax-M2 | 3300.00 | | | [View](https://poe.com/minimax-m2/api) |
| [Requesty](/llm/requesty.txt) | | 0.30 | 1.20 | | |
| [ValorGPT](/llm/valorgpt.txt) | MI | | | | [View](https://www.valorgpt.com/models/minimax-minimax-m2) |
| [Vercel AI Gateway](/llm/vercel.txt) | MiniMax M2 | 0.30 | 1.20 | | |
| [Yupp](/llm/yupp.txt) | MiniMax M2 (OpenRouter) | | | | |
| [ZenMUX](/llm/zenmux.txt) | MiniMax: MiniMax M2 | 0.30 | 1.20 | | |
| [Atlas Cloud](/llm/atlascloud.txt) | MiniMax-M2 | 0.20 | 1.00 | | [View](https://www.atlascloud.ai/models/MiniMaxAI/MiniMax-M2) |
| [Routeway](/llm/routeway.txt) | MiniMax: MiniMax M2 (Free) | 0.00 | 0.00 | Yes | [View](https://routeway.ai/models) |
| [Routeway](/llm/routeway.txt) | MiniMax: MiniMax M2 | 0.25 | 0.85 | | [View](https://routeway.ai/models) |
| [LangDB](/llm/langdb.txt) | minimax-m2 | | | | [View](https://langdb.ai/app/models) |
| [302.AI](/llm/302ai.txt) | MiniMax-M2 | 0.33 | 1.32 | | [View](https://302ai-en.apifox.cn/api-207705112) |
| [Yupp](/llm/yupp.txt) | MiniMax M2 (MiniMax) | | | | |
| [Kilo Code](/llm/kilocode.txt) | MiniMax: MiniMax M2 | 0.26 | 1.00 | | |
| [GMI Cloud](/llm/gmi.txt) | MiniMax-M2 | 0.30 | 1.20 | | |
| [Google Vertex AI](/llm/googlevertex.txt) | Minimax M2 | | | | |
| [Blackbox AI](/llm/blackboxai.txt) | MiniMax: MiniMax M2 | 0.26 | 1.02 | | |
| [302.AI](/llm/302ai.txt) | minimax/minimax-m2 | 0.30 | 1.20 | | [View](https://302ai-en.apifox.cn/api-308032503) |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Cortecs](/llm/cortecs.txt) | MiniMax-M2 | 0.39 | 1.57 | | |
| [Together AI](/llm/togetherai.txt) | MiniMax M2 | 0.00 | 0.00 | | |
| [Okara](/llm/okara.txt) | Minimax M2 | | | | [View](https://okara.ai/ai-models/minimax-m2) |
| [StreamLake](/llm/streamlake.txt) | MiniMax-M2 | 0.30 | 1.20 | | |
| [CommonStack](/llm/commonstack.txt) | MiniMax: MiniMax M2 | 0.30 | 1.20 | | |
| [RouterLink](/llm/routerlink.txt) | MiniMax M2 | | | | |
| [CometAPI](/llm/cometapi.txt) | minimax-m2 | 0.24 | 0.96 | | |
| [Qiniu](/llm/qiniuai.txt) | Minimax/Minimax-M2 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | minimax-m2 | 0.28 | 1.10 | | |
| [Writingmate](/llm/writingmate.txt) | MiniMax: MiniMax M2 | | | | [View](https://writingmate.ai/models/minimax/minimax-m2) |
| [LLM Stats](/llm/llmstats.txt) | MiniMax M2 | | | | |

---

[← Back to all providers](/llm.txt)