# Qwen3-235B-A22B-Thinking-2507

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant strengthens structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It enforces a dedicated reasoning mode (its chat template wraps the chain of thought in `<think>` tags) and is designed for high-token outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models on structured reasoning tasks.

## Model Information

- **Organization**: [qwen](/llm.txt)
- **Slug**: qwen3-235b-a22b-thinking-2507
- **Available at Providers**: 42
- **Release Date**: July 25, 2025

### Benchmark Scores

- AIME 2025: 0.923
- HLE: 0.182
- GPQA: 0.811

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | qwen3-235b-a22b-thinking-2507 | 0.28 | 2.80 | | [View](https://aihubmix.com/model/qwen3-235b-a22b-thinking-2507) |
| [AIHubMix](/llm/aihubmix.txt) | qwen-3-235b-a22b-thinking-2507 | 0.28 | 2.80 | | [View](https://aihubmix.com/model/qwen-3-235b-a22b-thinking-2507) |
| [AIHubMix](/llm/aihubmix.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.28 | 2.80 | | [View](https://aihubmix.com/model/Qwen3-235B-A22B-Thinking-2507) |
| [AIMLAPI](/llm/aimlapi.txt) | Qwen 3 Thinking 2507 | | | | |
| [Chutes.ai](/llm/chutes.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.11 | 0.60 | | |
| [Novita AI](/llm/novita.txt) | qwen3-235b-a22b-thinking-2507 | 0.30 | 3.00 | | |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | Qwen/Qwen3-235B-A22B-Thinking-2507 | 0.13 | 0.60 | | |
| [SiliconFlow](/llm/siliconflow.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.13 | 0.60 | | [View](https://www.siliconflow.com/models/qwen3-235b-a22b-thinking-2507) |
| [Hugging Face](/llm/huggingface.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.30 | 3.00 | | |
| [Weights & Biases](/llm/wandb.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.10 | 0.10 | | |
| [iFlow](/llm/iflowcn.txt) | Qwen3-235B-A22B-Thinking | 0.00 | 0.00 | Yes | |
| [submodel](/llm/submodel.txt) | Qwen3 235B A22B Thinking 2507 | 0.20 | 0.60 | | |
| [IO.NET](/llm/ionet.txt) | Qwen 3 235B Thinking | 0.11 | 0.60 | | |
| [ModelScope](/llm/modelscope.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.00 | 0.00 | Yes | |
| [Nano-GPT](/llm/nanogpt.txt) | Qwen 3 235b A22B 2507 Thinking | | | | |
| [OpenRouter](/llm/openrouter.txt) | Qwen3 235B A22B Thinking 2507 | 0.00 | 0.00 | | [View](https://openrouter.ai/qwen/qwen3-235b-a22b-thinking-2507) |
| [Synthetic.new](/llm/synthetic.txt) | Qwen/Qwen3-235B-A22B-Thinking-2507 | 0.65 | 3.00 | | |
| [Together AI](/llm/togetherai.txt) | Qwen3 235B A22B Thinking 2507 FP8 | 0.65 | 3.00 | | |
| [ValorGPT](/llm/valorgpt.txt) | Qwen3 235B A22B Thinking 2507 | | | | [View](https://www.valorgpt.com/models/qwen-qwen3-235b-a22b-thinking-2507) |
| [Venice](/llm/venice.txt) | Qwen 3 235B A22B Thinking 2507 | 0.45 | 3.50 | | |
| [Yupp](/llm/yupp.txt) | Qwen3 235B A22B Thinking 2507 FP8 (Together AI) | | | | |
| [Yupp](/llm/yupp.txt) | Qwen3 235B A22B Thinking 2507 (OpenRouter) | | | | |
| [Yupp](/llm/yupp.txt) | Qwen3-235B-A22B-Thinking-2507 | | | | |
| [ZenMUX](/llm/zenmux.txt) | Qwen: Qwen3 235B A22B Thinking 2507 | 0.28 | 2.78 | | |
| [Atlas Cloud](/llm/atlascloud.txt) | Qwen3 235B A22B Thinking 2507 | 0.20 | 2.30 | | [View](https://www.atlascloud.ai/models/qwen/qwen3-235b-a22b-thinking-2507) |
| [DeepInfra](/llm/deepinfra.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.23 | 2.30 | | |
| [Nebius Token Factory](/llm/nebius.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.20 | 0.80 | | [View](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
| [Routeway](/llm/routeway.txt) | Alibaba: Qwen3 235B A22B Thinking 2507 | 0.18 | 0.70 | | [View](https://routeway.ai/models) |
| [LangDB](/llm/langdb.txt) | qwen3-235b-a22b-thinking-2507 | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | Qwen: Qwen3 235B A22B Thinking 2507 | 0.00 | 0.00 | Yes | |
| [302.AI](/llm/302ai.txt) | qwen3-235b-a22b-thinking-2507 | 0.29 | 2.86 | | [View](https://302ai-en.apifox.cn/207705113e0) |
| [302.AI](/llm/302ai.txt) | sophnet/Qwen3-235B-A22B-Thinking-2507 | 0.29 | 2.86 | | [View](https://302ai-en.apifox.cn/api-319727732) |
| [302.AI](/llm/302ai.txt) | Qwen/Qwen3-235B-A22B-Thinking-2507 | 0.36 | 1.43 | | [View](https://302ai-en.apifox.cn/api-252564719) |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Jiekou.AI](/llm/jiekou.txt) | Qwen3 235B A22B Thinking 2507 | 0.30 | 3.00 | | |
| [StreamLake](/llm/streamlake.txt) | Qwen3-235B-A22B-Thinking-2507 | 0.35 | 4.20 | | |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/qwen/qwen3-235b-a22b-thinking-2507 | | | | |
| [Chats-LLM](/llm/chatsllm.txt) | Qwen: Qwen3 235B A22B Thinking 2507 | 0.00 | 0.00 | Yes | |
| [Qiniu](/llm/qiniuai.txt) | Qwen3 235B A22B Thinking 2507 | | | | |
| [ApiYI](/llm/apiyi.txt) | qwen3-235b-a22b-thinking-2507 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | qwen3-235b-a22b-thinking-2507 | 0.12 | 0.66 | | |
| [LLM Stats](/llm/llmstats.txt) | Qwen3-235B-A22B-Thinking-2507 | | | | |

---

[← Back to all providers](/llm.txt)
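The prices in the table are USD per one million tokens, billed separately for input and output. As an illustration only (not a billing tool — providers may round, cache, or meter differently, and `request_cost` is a name introduced here, not a provider API), a request's cost can be estimated from the listed rates:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate USD cost of one request from per-1M-token prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Example: DeepInfra's listed rates ($0.23 in / $2.30 out per 1M tokens)
# for a 10k-token prompt with a 20k-token reasoning-heavy response.
cost = request_cost(10_000, 20_000, 0.23, 2.30)
print(f"${cost:.4f}")  # → $0.0483
```

Because this thinking-only variant can emit up to 81,920 output tokens per response, output pricing usually dominates the bill for reasoning workloads.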