# QwQ-32B

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, capable of competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

## Model Information

- **Organization**: [qwen](/llm.txt)
- **Slug**: qwq-32b
- **Available at Providers**: 24
- **Release Date**: March 5, 2025

### Benchmark Scores

- GPQA: 0.652

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | QwQ-32B | 0.28 | 0.84 | | [View](https://aihubmix.com/model/QwQ-32B) |
| [AIHubMix](/llm/aihubmix.txt) | QwQ-32B | 0.14 | 0.56 | | [View](https://aihubmix.com/model/Qwen/QwQ-32B) |
| [Nvidia](/llm/nvidia.txt) | qwq-32b | 0.00 | 0.00 | Yes | [View](https://build.nvidia.com/qwen/qwq-32b) |
| [Abacus](/llm/abacus.txt) | QwQ 32B | 0.40 | 0.40 | | |
| [Alibaba (China)](/llm/alibabacn.txt) | QwQ 32B | 0.29 | 0.86 | | |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | Qwen/QwQ-32B | 0.15 | 0.58 | | |
| [SiliconFlow](/llm/siliconflow.txt) | QwQ-32B | 0.15 | 0.58 | | [View](https://www.siliconflow.com/models/qwq-32b) |
| [Cloudflare AI Gateway](/llm/cloudflareaigateway.txt) | QwQ 32B | 0.66 | 1.00 | | |
| [Nano-GPT](/llm/nanogpt.txt) | Qwen: QwQ 32B | | | | |
| [OpenRouter](/llm/openrouter.txt) | QwQ 32B | 0.15 | 0.40 | | [View](https://openrouter.ai/qwen/qwq-32b) |
| [ValorGPT](/llm/valorgpt.txt) | QwQ 32B | | | | [View](https://www.valorgpt.com/models/qwen-qwq-32b) |
| [Yupp](/llm/yupp.txt) | QwQ 32B (OpenRouter) | | | | |
| [Routeway](/llm/routeway.txt) | Qwen: QwQ 32B | 0.50 | 1.00 | | [View](https://routeway.ai/models) |
| [LangDB](/llm/langdb.txt) | qwq-32b | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | Qwen: QwQ 32B | 0.15 | 0.40 | | |
| [Baidu AI Studio](/llm/baidu.txt) | | | | | [View](https://aistudio.baidu.com) |
| [Cloudflare Workers AI](/llm/cloudflareworkersai.txt) | QwQ 32B | 0.66 | 1.00 | | |
| [Inference](/llm/inference.txt) | | | | | |
| [Blackbox AI](/llm/blackboxai.txt) | Qwen: QwQ 32B | 0.07 | 0.15 | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Together AI](/llm/togetherai.txt) | Qwen QwQ-32B | 0.00 | 0.00 | | |
| [ApiYI](/llm/apiyi.txt) | qwq-32b | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | qwq-32b | 0.17 | 0.44 | | |
| [Writingmate](/llm/writingmate.txt) | Qwen: QwQ 32B | | | | [View](https://writingmate.ai/models/qwen/qwq-32b) |

---

[← Back to all providers](/llm.txt)
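As a worked example of reading the pricing table, the sketch below estimates the cost of a single request from per-1M-token rates. The function name and the token counts are illustrative, not part of any provider's API; the $0.15 / $0.40 rates are OpenRouter's listed QwQ 32B pricing from the table above.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_m: float, output_per_m: float) -> float:
    """Cost of one request given prices quoted per 1M tokens."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# OpenRouter's listed QwQ 32B rates: $0.15 input / $0.40 output per 1M tokens.
# Token counts here are hypothetical.
cost = request_cost_usd(10_000, 4_000, 0.15, 0.40)
print(f"${cost:.4f}")  # -> $0.0031
```

Note that reasoning models like QwQ typically emit long chains of thinking tokens, which providers generally bill as output tokens, so output-side pricing tends to dominate the cost of a request.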