# minimax-m1

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models like DeepSeek R1 and Qwen3-235B.

## Model Information

- **Organization**: [Minimax](/llm.txt)
- **Slug**: minimax-m1
- **Available at Providers**: 11
- **Release Date**: June 17, 2025

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [Nano-GPT](/llm/nanogpt.txt) | MiniMax M1 | | | | |
| [OpenRouter](/llm/openrouter.txt) | MiniMax M1 | 0.40 | 2.20 | | [View](https://openrouter.ai/minimax/minimax-m1) |
| [ValorGPT](/llm/valorgpt.txt) | MI | | | | [View](https://www.valorgpt.com/models/minimax-minimax-m1) |
| [Yupp](/llm/yupp.txt) | MiniMax M1 (OpenRouter) | | | | |
| [LangDB](/llm/langdb.txt) | minimax-m1 | | | | [View](https://langdb.ai/app/models) |
| [302.AI](/llm/302ai.txt) | MiniMax-M1 | 0.13 | 1.25 | | |
| [Kilo Code](/llm/kilocode.txt) | MiniMax: MiniMax M1 | 0.40 | 2.20 | | |
| [Blackbox AI](/llm/blackboxai.txt) | MiniMax: MiniMax M1 | 0.30 | 1.65 | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Qiniu](/llm/qiniuai.txt) | MiniMax M1 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | minimax-m1 | 0.44 | 2.42 | | |

---

[← Back to all providers](/llm.txt)
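Because M1 supports up to 1M-token contexts, per-request costs can add up quickly at the per-1M-token rates listed above. A minimal sketch of the cost arithmetic, using the OpenRouter prices from the table (the token counts are illustrative; check the provider for current pricing):

```python
# Estimate per-request cost for MiniMax M1 at OpenRouter's listed rates
# ($0.40 per 1M input tokens, $2.20 per 1M output tokens).
INPUT_PER_M = 0.40
OUTPUT_PER_M = 2.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a long-context request with 500k input and 10k output tokens.
cost = request_cost(500_000, 10_000)
print(f"${cost:.3f}")  # → $0.222
```

The same formula applies to any provider in the table; only the two rate constants change.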