# inclusionAI/Ling-mini-2.0

Ling-mini-2.0 is an open-source Mixture-of-Experts (MoE) large language model designed to balance strong task performance with high inference efficiency. It has 16B total parameters, of which roughly 1.4B are activated per token (about 789M non-embedding). Trained on more than 20T tokens and refined through multi-stage supervised fine-tuning and reinforcement learning, it is reported to deliver strong results on complex reasoning and instruction following while keeping computational costs low. According to the upstream release, it reaches top-tier performance among sub-10B dense LLMs and in some cases matches or surpasses larger MoE models.

## Model Information

- **Organization**: [SiliconFlow](/llm.txt)
- **Slug**: ling-mini-2-0
- **Available at Providers**: 5

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | Ling-mini-2.0 | 0.07 | 0.27 | | [View](https://aihubmix.com/model/inclusionAI/Ling-mini-2.0) |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | inclusionAI/Ling-mini-2.0 | 0.07 | 0.28 | | |
| [SiliconFlow](/llm/siliconflow.txt) | Ling-mini-2.0 | 0.07 | 0.28 | | [View](https://www.siliconflow.com/models/ling-mini-2-0) |
| [ZenMUX](/llm/zenmux.txt) | inclusionAI: Ling-mini-2.0 | 0.07 | 0.28 | | |
| [302.AI](/llm/302ai.txt) | inclusionAI/Ling-mini-2.0 | 0.07 | 0.29 | | [View](https://302ai-en.apifox.cn/api-252564719) |

---

[← Back to all providers](/llm.txt)
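Since all prices in the providers table are quoted per 1M tokens, estimating the cost of a request is simple arithmetic. The sketch below is illustrative only: the function name is hypothetical and not part of any provider SDK, and the default rates are the SiliconFlow prices from the table above.

```python
# Hypothetical helper (not a provider SDK function): estimate request cost
# in USD from per-1M-token prices. Default rates are SiliconFlow's listed
# prices for Ling-mini-2.0: $0.07 input, $0.28 output per 1M tokens.

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price: float = 0.07,
                      output_price: float = 0.28) -> float:
    """Cost = (tokens / 1M) * price-per-1M, summed over input and output."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example: a 4,000-token prompt with a 1,000-token completion.
cost = estimate_cost_usd(4_000, 1_000)
print(f"${cost:.6f}")
```

At these rates, 1M input plus 1M output tokens comes to about $0.35, which is why the model is positioned as a low-cost option.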