# Qwen3 Max

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It delivers higher accuracy in math, coding, logic, and science tasks, follows complex instructions in Chinese and English more reliably, reduces hallucinations, and produces higher-quality responses for open-ended Q&A, writing, and conversation. The model supports over 100 languages with stronger translation and commonsense reasoning, and is optimized for retrieval-augmented generation (RAG) and tool calling, though it does not include a dedicated "thinking" mode.

## Model Information

- **Organization**: [qwen](/llm.txt)
- **Slug**: qwen3-max
- **Available at Providers**: 47
- **Release Date**: December 15, 2025

### Benchmark Scores

- Weekly: 0.12
- AIME 2025: 0.816
- GPQA: 0.62
- SWE Bench: 0.696

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | qwen3-max | 0.34 | 1.37 | | [View](https://aihubmix.com/model/qwen3-max) |
| [Alibaba](/llm/alibaba.txt) | Qwen3 Max | 1.20 | 6.00 | | |
| [Abacus](/llm/abacus.txt) | Qwen3 Max | 1.20 | 6.00 | | |
| [Alibaba (China)](/llm/alibabacn.txt) | Qwen3 Max | 0.86 | 3.44 | | |
| [Novita AI](/llm/novita.txt) | qwen3-max | | | | |
| [iFlow](/llm/iflowcn.txt) | Qwen3-Max | 0.00 | 0.00 | Yes | |
| [Nano-GPT](/llm/nanogpt.txt) | Qwen3 Max | | | | |
| [OpenRouter](/llm/openrouter.txt) | Qwen3 Max | 1.20 | 6.00 | | [View](https://openrouter.ai/qwen/qwen3-max) |
| [Poe](/llm/poe.txt) | Qwen3-Max | | | | [View](https://poe.com/qwen3-max/api) |
| [Requesty](/llm/requesty.txt) | | 0.86 | 3.44 | | |
| [ValorGPT](/llm/valorgpt.txt) | Qwen3 Max | | | | [View](https://www.valorgpt.com/models/qwen-qwen3-max) |
| [Vercel AI Gateway](/llm/vercel.txt) | Qwen3 Max | 1.20 | 6.00 | | |
| [Yupp](/llm/yupp.txt) | Qwen3 Max Instruct | | | | |
| [Yupp](/llm/yupp.txt) | Qwen3 Max Instruct Preview (Novita) | | | | |
| [ZenMUX](/llm/zenmux.txt) | Qwen: Qwen3-Max-Thinking | 1.20 | 6.00 | | |
| [Glama](/llm/glama.txt) | qwen3-max-2025-09-23 | 1.60 | 6.40 | | [View](https://glama.ai/gateway/models/qwen3-max-2025-09-23) |
| [LangDB](/llm/langdb.txt) | qwen3-max | | | | [View](https://langdb.ai/app/models) |
| [Yupp](/llm/yupp.txt) | Qwen3 Max Thinking | | | | |
| [AIHubMix](/llm/aihubmix.txt) | qwen3-max-2026-01-23 | 0.34 | 1.37 | | [View](https://aihubmix.com/model/qwen3-max-2026-01-23) |
| [Nano-GPT](/llm/nanogpt.txt) | Qwen3 Max 2026-01-23 | | | | |
| [302.AI](/llm/302ai.txt) | qwen3-max-2025-09-23 | 0.86 | 3.43 | | [View](https://302ai-en.apifox.cn/291197842e0) |
| [Kilo Code](/llm/kilocode.txt) | Qwen: Qwen3 Max | 1.20 | 6.00 | | |
| [302.AI](/llm/302ai.txt) | qwen3-max-2026-01-23 | 0.36 | 1.43 | | [View](https://302ai-en.apifox.cn/207705113e0) |
| [302.AI](/llm/302ai.txt) | qwen3-max | 0.46 | 1.83 | | [View](https://302ai-en.apifox.cn/api-207705128) |
| [Poe](/llm/poe.txt) | Qwen3-Max-Thinking | | | | [View](https://poe.com/qwen3-max-thinking/api) |
| [Vercel AI Gateway](/llm/vercel.txt) | Qwen 3 Max Thinking | 1.20 | 6.00 | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [OpenRouter](/llm/openrouter.txt) | Qwen3 Max Thinking | 1.20 | 6.00 | | [View](https://openrouter.ai/qwen/qwen3-max-thinking-20260123) |
| [Kilo Code](/llm/kilocode.txt) | Qwen: Qwen3 Max Thinking | 1.20 | 6.00 | | |
| [Yupp](/llm/yupp.txt) | Qwen3 Max Instruct Preview (OpenRouter) | | | | |
| [RouterLink](/llm/routerlink.txt) | Qwen3-Max-Thinking | | | | |
| [DeepInfra](/llm/deepinfra.txt) | Qwen3-Max | 1.20 | 6.00 | | |
| [CometAPI](/llm/cometapi.txt) | qwen3 max | 0.80 | 3.20 | | |
| [DeepInfra](/llm/deepinfra.txt) | Qwen3-Max-Thinking | 1.20 | 6.00 | | |
| [Qiniu](/llm/qiniuai.txt) | Qwen3 Max | | | | |
| [LangDB](/llm/langdb.txt) | qwen3-max-thinking | | | | [View](https://langdb.ai/app/models) |
| [ApiYI](/llm/apiyi.txt) | qwen3-max | | | | |
| [ApiYI](/llm/apiyi.txt) | qwen3-max-2025-09-23 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | qwen3-max | 1.32 | 6.60 | | |
| [Airforce API](/llm/airforce.txt) | qwen3-max-2026-01-23 | | | | |
| [Writingmate](/llm/writingmate.txt) | Qwen: Qwen3 Max Thinking | | | | [View](https://writingmate.ai/models/qwen/qwen3-max-thinking) |
| [Writingmate](/llm/writingmate.txt) | Qwen: Qwen3 Max | | | | [View](https://writingmate.ai/models/qwen/qwen3-max) |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/qwen/qwen3-max | | | | |
| [LLM Stats](/llm/llmstats.txt) | Qwen3 Max | | | | |

---

[← Back to all providers](/llm.txt)
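The pricing columns above are quoted per 1M tokens, so comparing providers for a given workload is simple arithmetic. A minimal sketch (the helper name and token counts are hypothetical, rates are the Alibaba row's $1.20 input / $6.00 output per 1M tokens):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical workload: 50k input + 10k output tokens at Alibaba's listed rates.
cost = estimate_cost(50_000, 10_000, 1.20, 6.00)
print(f"${cost:.3f}")  # → $0.120
```

Running the same numbers against another row (e.g. AIHubMix's 0.34 / 1.37) shows how much the per-provider spread matters for output-heavy workloads, since output rates vary more than input rates in the table.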