# Qwen Max

Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It is a large-scale Mixture-of-Experts (MoE) model pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). The parameter count has not been disclosed.

## Model Information

- **Organization**: [Alibaba](/llm.txt)
- **Slug**: qwen-max
- **Available at Providers**: 18
- **Release Date**: February 1, 2025

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | qwen-max | 0.38 | 1.52 | | [View](https://aihubmix.com/model/qwen-max) |
| [AIMLAPI](/llm/aimlapi.txt) | Qwen max | 2.08 | 8.32 | | [View](https://aimlapi.com/models/qwen-max-api) |
| [AIMLAPI](/llm/aimlapi.txt) | Qwen Max 2025-01-25 | | | | [View](https://aimlapi.com/models/qwen-max-2025-01-25-api) |
| [Alibaba](/llm/alibaba.txt) | Qwen Max | 1.60 | 6.40 | | |
| [Alibaba (China)](/llm/alibabacn.txt) | Qwen Max | 0.35 | 1.38 | | |
| [Nano-GPT](/llm/nanogpt.txt) | Qwen 2.5 Max | | | | |
| [OpenRouter](/llm/openrouter.txt) | Qwen-Max | 1.60 | 6.40 | | [View](https://openrouter.ai/qwen/qwen-max-2025-01-25) |
| [Requesty](/llm/requesty.txt) | | 1.60 | 6.40 | | |
| [ValorGPT](/llm/valorgpt.txt) | Qwen-Max | | | | [View](https://www.valorgpt.com/models/qwen-qwen-max) |
| [Yupp](/llm/yupp.txt) | Qwen Max | | | | |
| [Yupp](/llm/yupp.txt) | Qwen Max (Jan'25) | | | | |
| [LangDB](/llm/langdb.txt) | qwen-max | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | Qwen: Qwen-Max | 1.60 | 6.40 | | |
| [Blackbox AI](/llm/blackboxai.txt) | Qwen: Qwen-Max | 1.60 | 6.40 | | |
| [Qiniu](/llm/qiniuai.txt) | Qwen2.5-Max-2025-01-25 | | | | |
| [ApiYI](/llm/apiyi.txt) | qwen-max | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | qwen-max | 1.76 | 7.04 | | |
| [Writingmate](/llm/writingmate.txt) | Qwen: Qwen-Max | | | | [View](https://writingmate.ai/models/qwen/qwen-max) |

---

[← Back to all providers](/llm.txt)
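The prices in the table are USD per million tokens, so the cost of a single request scales linearly with its input and output token counts. A minimal sketch of that arithmetic, using the Alibaba row's rates ($1.60 input / $6.40 output per 1M tokens); the token counts are hypothetical examples:

```python
# Estimate a per-request cost in USD from per-1M-token rates.
# Rates below are from the Alibaba row of the providers table;
# the token counts are hypothetical, for illustration only.

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Cost in USD, given rates quoted per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Alibaba lists qwen-max at $1.60 input / $6.40 output per 1M tokens.
cost = request_cost(input_tokens=2_000, output_tokens=500,
                    input_rate=1.60, output_rate=6.40)
print(f"${cost:.4f}")  # 2000*1.60/1e6 + 500*6.40/1e6 = 0.0032 + 0.0032 = $0.0064
```

The same function applied across rows makes the spread visible: at Alibaba (China)'s rates (0.35 / 1.38) the identical request costs roughly a quarter as much as at the international Alibaba endpoint.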