inclusionAI/Ling-mini-2.0

Model Information
Slug: ling-mini-2-0
Aliases: ling-mini-2-0, lingmini20
Organization: SiliconFlow

Ling-mini-2.0 is an open-source Mixture-of-Experts (MoE) large language model designed to balance strong task performance with high inference efficiency. It has 16B total parameters, with approximately 1.4B activated per token (about 789M non-embedding). Trained on over 20T tokens and refined via multi-stage supervised fine-tuning and reinforcement learning, it is reported to deliver strong results in complex reasoning and instruction following while keeping computational costs low. According to the upstream release, it reaches top-tier performance among sub-10B dense LLMs and in some cases matches or surpasses larger MoE models.
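
For local experimentation, the upstream weights can typically be loaded with Hugging Face transformers. The sketch below is illustrative only: the repository id inclusionAI/Ling-mini-2.0 and the use of trust_remote_code for the custom MoE architecture are assumptions; check the model card for exact requirements.

```python
# Minimal local-inference sketch for Ling-mini-2.0 (assumptions: the Hugging Face
# repo id is inclusionAI/Ling-mini-2.0 and its custom MoE code loads via
# trust_remote_code; verify on the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-mini-2.0"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # only ~1.4B params are active per token,
    device_map="auto",            # but all 16B must fit in memory
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```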

Available at 4 Providers

| Provider | Model Name | Original Model | Input ($/1M) | Output ($/1M) |
|---|---|---|---|---|
| AIHubMix | Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07 | $0.27 |
| SiliconFlow (China) | inclusionAI/Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07 | $0.28 |
| SiliconFlow | inclusionAI/Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07 | $0.28 |
| ZenMUX | inclusionAI: Ling-mini-2.0 | inclusionai/ling-mini-2.0 | $0.07 | $0.28 |
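
Prices are per million tokens, and each provider exposes the model through an OpenAI-compatible chat completions API. The sketch below assumes SiliconFlow's endpoint URL and a SILICONFLOW_API_KEY environment variable; both are assumptions, and other providers differ only in base URL and the model identifier shown in the table.

```python
# Sketch of a chat completion request against an OpenAI-compatible provider API.
# Assumptions: the base URL shown here and the SILICONFLOW_API_KEY environment
# variable name; consult the provider's documentation for the real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.com/v1",   # assumed endpoint; the China region uses a .cn domain
    api_key=os.environ["SILICONFLOW_API_KEY"],
)

response = client.chat.completions.create(
    model="inclusionAI/Ling-mini-2.0",           # model name exactly as listed by the provider
    messages=[
        {"role": "user", "content": "Summarize the benefits of MoE inference in one paragraph."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```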