# MiniMax M2.1

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

Compared to its predecessor, M2.1 produces cleaner, more concise outputs and faster perceived response times. It shows leading multilingual coding performance across major systems and application languages, achieving 49.4% on Multi-SWE-Bench and 72.5% on SWE-Bench Multilingual, and serves as a versatile agent "brain" for IDEs, coding tools, and general-purpose assistance.

To avoid degrading this model's performance, MiniMax strongly recommends preserving reasoning between turns. Learn more about using `reasoning_details` to pass reasoning back in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks).

## Model Information

- **Organization**: [MiniMax](/llm.txt)
- **Slug**: minimax-m2-1
- **Available at Providers**: 53
- **Release Date**: December 23, 2025

### Benchmark Scores

- Weekly: 2.1
- AIME 2025: 0.81
- HLE: 0.22
- GPQA: 0.81
- SWE Bench: 0.67
- Terminal: 0.479
- BrowseComp: 0.62

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | cc-minimax-m2.1 | 0.10 | 0.10 | | [View](https://aihubmix.com/model/cc-minimax-m2.1) |
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2.1 | 0.20 | 0.20 | | [View](https://aihubmix.com/model/coding-minimax-m2.1) |
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2.1-free | 0.00 | 0.00 | Yes | [View](https://aihubmix.com/model/coding-minimax-m2.1-free) |
| [AIHubMix](/llm/aihubmix.txt) | minimax-m2.1 | 0.29 | 1.15 | | [View](https://aihubmix.com/model/minimax-m2.1) |
| [Fireworks AI](/llm/fireworks.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [Novita AI](/llm/novita.txt) | minimax-m2.1 | 0.30 | 1.20 | | |
| [Hugging Face](/llm/huggingface.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [MiniMax](/llm/minimax.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [MiniMax (China)](/llm/minimaxcn.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [DeepInfra](/llm/deepinfra.txt) | MiniMax-M2.1 | 0.27 | 0.95 | | |
| [Nano-GPT](/llm/nanogpt.txt) | MiniMax M2.1 | | | | |
| [Nano-GPT](/llm/nanogpt.txt) | MiniMax M2.1 TEE | | | | |
| [Ollama Cloud](/llm/ollama.txt) | minimax-m2.1 | | | | [View](https://ollama.com/library/minimax-m2.1) |
| [OpenRouter](/llm/openrouter.txt) | MiniMax M2.1 | 0.27 | 0.95 | | [View](https://openrouter.ai/minimax/minimax-m2.1) |
| [Poe](/llm/poe.txt) | Minimax-M2.1 | 0.30 | 1.20 | | [View](https://poe.com/minimax-m2.1/api) |
| [Synthetic.new](/llm/synthetic.txt) | MiniMaxAI/MiniMax-M2.1 | 0.30 | 1.20 | | |
| [Venice](/llm/venice.txt) | MiniMax M2.1 | 0.40 | 1.60 | | |
| [Vercel AI Gateway](/llm/vercel.txt) | MiniMax M2.1 | 0.30 | 1.20 | | |
| [Yupp](/llm/yupp.txt) | MiniMax M2.1 (MiniMax) | | | | |
| [Yupp](/llm/yupp.txt) | MiniMax M2.1 (OpenRouter) | | | | |
| [ZenMUX](/llm/zenmux.txt) | MiniMax: MiniMax M2.1 | 0.30 | 1.20 | | |
| [Atlas Cloud](/llm/atlascloud.txt) | MiniMax M2.1 | 0.30 | 1.20 | | [View](https://www.atlascloud.ai/models/minimaxai/minimax-m2.1) |
| [SiliconFlow](/llm/siliconflow.txt) | MiniMax-M2.1 | 0.29 | 1.20 | | [View](https://www.siliconflow.com/models/minimax-m2-1) |
| [Nebius Token Factory](/llm/nebius.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | [View](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | Pro/MiniMaxAI/MiniMax-M2.1 | 0.30 | 1.20 | | |
| [Moark](/llm/moark.txt) | MiniMax-M2.1 | 2.10 | 8.40 | | |
| [OpenCode Zen](/llm/opencode.txt) | minimax-m2.1 | 0.30 | 1.20 | | |
| [302.AI](/llm/302ai.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | [View](https://302ai-en.apifox.cn/api-207705112) |
| [Kilo Code](/llm/kilocode.txt) | MiniMax: MiniMax M2.1 | 0.27 | 0.95 | | |
| [GMI Cloud](/llm/gmi.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [Friendli](/llm/friendli.txt) | MiniMax M2.1 | 0.30 | 1.20 | | |
| [Roo Code](/llm/roocode.txt) | MiniMax M2.1 | 0.30 | 1.20 | | [View](https://roocode.com) |
| [Nvidia](/llm/nvidia.txt) | minimax-m2.1 | 0.00 | 0.00 | Yes | [View](https://build.nvidia.com/minimaxai/minimax-m2.1) |
| [302.AI](/llm/302ai.txt) | minimax/minimax-m2.1 | 0.30 | 1.20 | | [View](https://302ai-en.apifox.cn/api-308032503) |
| [Jiekou.AI](/llm/jiekou.txt) | Minimax M2.1 | 0.30 | 1.20 | | |
| [Together AI](/llm/togetherai.txt) | Minimax M2.1 | 0.00 | 0.00 | | |
| [Okara](/llm/okara.txt) | Minimax M2.1 | | | | [View](https://okara.ai/ai-models/minimax-m2.1) |
| [StreamLake](/llm/streamlake.txt) | MiniMax-M2.1 | 0.30 | 1.20 | | |
| [CommonStack](/llm/commonstack.txt) | MiniMax: MiniMax M2.1 | 0.30 | 1.20 | | |
| [RouterLink](/llm/routerlink.txt) | MiniMax M2.1 | | | | |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/minimax/minimax-m2.1 | | | | |
| [CometAPI](/llm/cometapi.txt) | MiniMax M2.1 | 0.24 | 0.96 | | |
| [FastRouter](/llm/fastrouter.txt) | MiniMax: MiniMax M2.1 | 0.27 | 0.95 | | [View](https://fastrouter.ai/models/minimax/minimax-m2.1) |
| [MegaNova](/llm/meganova.txt) | MiniMax M2.1 | 0.28 | 1.20 | | |
| [Requesty](/llm/requesty.txt) | | 0.30 | 1.20 | | |
| [Windsurf](/llm/windsurf.txt) | Minimax M2.1 | | | | |
| [Qiniu](/llm/qiniuai.txt) | Minimax/Minimax-M2.1 | | | | |
| [LangDB](/llm/langdb.txt) | minimax-m2.1 | | | | [View](https://langdb.ai/app/models) |
| [ApiYI](/llm/apiyi.txt) | MiniMax-M2.1 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | minimax-m2.1 | 0.30 | 1.05 | | |
| [Writingmate](/llm/writingmate.txt) | MiniMax: MiniMax M2.1 | | | | [View](https://writingmate.ai/models/minimax/minimax-m2.1) |
| [LLM Stats](/llm/llmstats.txt) | MiniMax M2.1 | | | | |

---

[← Back to all providers](/llm.txt)
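The recommendation above to preserve reasoning between turns can be sketched in code. This is a minimal illustration, assuming an OpenAI-compatible chat-completions message format; the `append_assistant_turn` helper and the response shape shown are hypothetical, while the `reasoning_details` field name follows OpenRouter's reasoning-token docs.

```python
# Sketch: carry MiniMax-M2.1's reasoning forward between turns.
# The helper and the example response shape below are illustrative
# assumptions, not an official client API.

def append_assistant_turn(messages, response_message):
    """Append the model's reply to the chat history, keeping its
    reasoning_details so the next request preserves the reasoning."""
    turn = {"role": "assistant", "content": response_message.get("content", "")}
    if response_message.get("reasoning_details"):
        # Pass the reasoning blocks back verbatim on the next request.
        turn["reasoning_details"] = response_message["reasoning_details"]
    messages.append(turn)
    return messages

# One completed turn (the reply dict stands in for a parsed API response).
history = [{"role": "user", "content": "Refactor this function to be iterative."}]
reply = {
    "role": "assistant",
    "content": "Here is an iterative version.",
    "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
}
append_assistant_turn(history, reply)
# history now holds both turns; send it as `messages` on the next request.
```

Dropping `reasoning_details` from the history and resending only plain `content` is exactly the pattern the docs warn against, as it degrades multi-turn performance.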