# MiniMax M2.5

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work: it is fluent in generating and operating Word, Excel, and PowerPoint files, switching context between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.

## Model Information

- **Organization**: [MiniMax](/llm.txt)
- **Slug**: minimax-m2-5
- **Available at Providers**: 31
- **Release Date**: February 12, 2026

### Benchmark Scores

- Weekly: 4.22
- SWE Bench: 0.802
- BrowseComp: 0.763

## Providers

| Provider | Name | $ Input (per 1M tokens) | $ Output (per 1M tokens) | Free | Link |
|----------|------|-------------------------|--------------------------|------|------|
| [Kilo Code](/llm/kilocode.txt) | MiniMax: MiniMax M2.5 (free) | 0.00 | 0.00 | Yes | [View](https://kilo.ai/models/minimax/minimax-m2.5) |
| [Kilo Code](/llm/kilocode.txt) | MiniMax: MiniMax M2.5 | 0.30 | 1.20 | | [View](https://kilo.ai/models/minimax/minimax-m2.5) |
| [NetMind](/llm/netmind.txt) | minimax-2.5 | 0.00 | 0.00 | Yes | |
| [Novita AI](/llm/novita.txt) | minimax-m2.5 | 0.30 | 1.20 | | |
| [Okara](/llm/okara.txt) | Minimax M2.5 | | | | [View](https://okara.ai/ai-models/MiniMax-M2.5) |
| [OpenRouter](/llm/openrouter.txt) | MiniMax M2.5 | 0.30 | 1.20 | | [View](https://openrouter.ai/minimax/minimax-m2.5-20260211) |
| [Requesty](/llm/requesty.txt) | | 0.30 | 1.20 | | |
| [Yupp](/llm/yupp.txt) | MiniMax M2.5 (MiniMax) | | | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Ollama Cloud](/llm/ollama.txt) | minimax-m2.5 | | | | [View](https://ollama.com/library/minimax-m2.5) |
| [OpenCode Zen](/llm/opencode.txt) | minimax-m2.5 | 0.00 | 0.00 | Yes | |
| [Vercel AI Gateway](/llm/vercel.txt) | MiniMax M2.5 | 0.30 | 1.20 | | |
| [ZenMUX](/llm/zenmux.txt) | MiniMax: MiniMax M2.5 | 0.30 | 1.20 | | |
| [LLM Stats](/llm/llmstats.txt) | MiniMax M2.5 | | | | |
| [Yupp](/llm/yupp.txt) | MiniMax M2.5 (OpenRouter) | | | | |
| [GMI Cloud](/llm/gmi.txt) | MiniMaxAI/MiniMax-M2.5 | 0.30 | 1.20 | | |
| [MiniMax (China)](/llm/minimaxcn.txt) | MiniMax-M2.5 | 0.30 | 1.20 | | |
| [MiniMax](/llm/minimax.txt) | MiniMax-M2.5 | 0.30 | 1.20 | | |
| [Nano-GPT](/llm/nanogpt.txt) | MiniMax M2.5 | | | | |
| [Venice](/llm/venice.txt) | MiniMax M2.5 | 0.40 | 1.60 | | |
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2.5-free | 0.00 | 0.00 | Yes | [View](https://aihubmix.com/model/coding-minimax-m2.5-free) |
| [AIHubMix](/llm/aihubmix.txt) | minimax-m2.5 | 0.29 | 1.15 | | [View](https://aihubmix.com/model/minimax-m2.5) |
| [AIHubMix](/llm/aihubmix.txt) | cc-minimax-m2.5 | 0.10 | 0.10 | | [View](https://aihubmix.com/model/cc-minimax-m2.5) |
| [AIHubMix](/llm/aihubmix.txt) | coding-minimax-m2.5 | 0.20 | 0.20 | | [View](https://aihubmix.com/model/coding-minimax-m2.5) |
| [Routeway](/llm/routeway.txt) | MiniMax: MiniMax M2.5 | 0.34 | 1.37 | | [View](https://routeway.ai/models) |
| [CommonStack](/llm/commonstack.txt) | MiniMax M2.5 | 0.30 | 1.20 | | |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/minimax/minimax-m2.5 | | | | |
| [CometAPI](/llm/cometapi.txt) | MiniMax M2.5 | 0.24 | 0.96 | | |
| [302.AI](/llm/302ai.txt) | MiniMax-M2.5 | 0.30 | 1.20 | | [View](https://302ai-en.apifox.cn/api-207705112) |
| [Fireworks AI](/llm/fireworks.txt) | MiniMax-M2.5 | 1.20 | | | |
| [Parasail](/llm/parasail.txt) | Minimax M25 | | | | [View](https://www.saas.parasail.io/pricing) |

---

[← Back to all providers](/llm.txt)
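As a quick note on the pricing columns: rates are quoted in USD per 1M tokens, billed separately for input and output. A minimal sketch of that arithmetic, using the $0.30 input / $1.20 output figures most providers list (actual billing may vary per provider, and the function name here is illustrative, not any provider's API):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate_per_m: float = 0.30,
                      output_rate_per_m: float = 1.20) -> float:
    """Estimated request cost in USD from token counts and per-1M-token rates."""
    return (input_tokens * input_rate_per_m
            + output_tokens * output_rate_per_m) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion
# costs (10000 * 0.30 + 2000 * 1.20) / 1e6 = $0.0054
cost = estimate_cost_usd(10_000, 2_000)
```

Output tokens are typically several times more expensive than input tokens, so long completions dominate the bill for chat-style workloads.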