# Qwen2.5 32B Instruct

Qwen2.5 32B Instruct is the instruction-tuned variant of the latest Qwen large language model series. It offers enhanced instruction following, improved coding and mathematical reasoning, and robust handling of structured data and outputs such as JSON. It supports long-context processing up to 128K tokens and multilingual tasks across 29+ languages. The model has 32.5 billion parameters and 64 layers, and uses a transformer architecture with RoPE, SwiGLU, RMSNorm, and attention QKV bias. For more details, see the [Qwen2.5 Blog](https://qwenlm.github.io/blog/qwen2.5/).

## Model Information

- **Organization**: [qwen](/llm.txt)
- **Slug**: qwen-2-5-32b-instruct
- **Available at Providers**: 10
- **Release Date**: September 19, 2024

### Benchmark Scores

- GPQA: 0.495

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | qwen2.5-32b-instruct | 0.60 | 1.20 | | [View](https://aihubmix.com/model/qwen2.5-32b-instruct) |
| [AIHubMix](/llm/aihubmix.txt) | Qwen2.5-32B-Instruct | 0.60 | 0.60 | | [View](https://aihubmix.com/model/Qwen/Qwen2.5-32B-Instruct) |
| [Alibaba](/llm/alibaba.txt) | Qwen2.5 32B Instruct | 0.70 | 2.80 | | |
| [Alibaba (China)](/llm/alibabacn.txt) | Qwen2.5 32B Instruct | 0.29 | 0.86 | | |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | Qwen/Qwen2.5-32B-Instruct | 0.18 | 0.18 | | |
| [SiliconFlow](/llm/siliconflow.txt) | Qwen2.5-32B-Instruct | 0.18 | 0.18 | | [View](https://www.siliconflow.com/models/qwen2-5-32b-instruct) |
| [Together AI](/llm/togetherai.txt) | Qwen2.5 32B Instruct | 0.00 | 0.00 | | |
| [CometAPI](/llm/cometapi.txt) | | 0.96 | 0.96 | | |
| [ApiYI](/llm/apiyi.txt) | qwen2.5-32b-instruct | | | | |
| [Writingmate](/llm/writingmate.txt) | Qwen: Qwen2.5 32B Instruct | | | | [View](https://writingmate.ai/models/qwen/qwen2.5-32b-instruct) |

---

[← Back to all providers](/llm.txt)
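To see how the per-1M-token prices in the table translate into per-request cost, here is a minimal sketch. The helper name and the example token counts are hypothetical; the $0.18 input/output prices are SiliconFlow's rates from the table above.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost of one request, given per-1M-token input and output prices."""
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# Hypothetical request: 10,000 input tokens and 2,000 output tokens
# at SiliconFlow's $0.18 / $0.18 per 1M tokens.
cost = request_cost_usd(10_000, 2_000, 0.18, 0.18)
print(f"${cost:.6f}")  # → $0.002160
```

Providers that bill output tokens at a higher rate (e.g. Alibaba at $0.70 in / $2.80 out) can cost several times more for generation-heavy workloads, which is why the two prices are listed separately.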