inclusionAI/Ling-mini-2.0

Model Information
Slug: ling-mini-2-0
Organization: inclusionAI
Model Description
Ling-mini-2.0 is an open-source Mixture-of-Experts (MoE) large language model designed to balance strong task performance with high inference efficiency. It has 16B total parameters, with approximately 1.4B activated per token (about 789M non-embedding). Trained on over 20T tokens and refined via multi-stage supervised fine-tuning and reinforcement learning, it is reported to deliver strong results in complex reasoning and instruction following while keeping computational costs low. According to the upstream release, it reaches top-tier performance among sub-10B dense LLMs and in some cases matches or surpasses larger MoE models.
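The efficiency claim rests on the activated-parameter fraction, which follows directly from the figures above. A minimal sketch (the 16B total and ~1.4B activated figures come from the description; everything else is illustrative):

```python
# Parameter counts quoted in the model description.
TOTAL_PARAMS = 16e9        # 16B total parameters
ACTIVATED_PARAMS = 1.4e9   # ~1.4B activated per token

# Fraction of the network's weights that participate in each forward pass.
activated_fraction = ACTIVATED_PARAMS / TOTAL_PARAMS
print(f"{activated_fraction:.2%}")  # → 8.75%
```

Only about one twelfth of the weights are used per token, which is why per-token compute is closer to a ~1.4B dense model than to a 16B one.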
Available at 5 Providers

| Provider            | Model Name                | Original Model            | Input ($/1M) | Output ($/1M) |
|---------------------|---------------------------|---------------------------|--------------|---------------|
| AIHubMix            | Ling-mini-2.0             | inclusionAI/Ling-mini-2.0 | $0.07        | $0.27         |
| SiliconFlow (China) | inclusionAI/Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07        | $0.28         |
| SiliconFlow         | inclusionAI/Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07        | $0.28         |
| ZenMUX              | inclusionAI: Ling-mini-2.0 | inclusionai/ling-mini-2.0 | $0.07       | $0.28         |
| 302.AI              | inclusionAI/Ling-mini-2.0 | inclusionAI/Ling-mini-2.0 | $0.07        | $0.29         |
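The per-million-token prices above translate into a per-request cost estimate. A minimal sketch (the helper name and token counts are illustrative; the prices are the SiliconFlow rates from the table):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Estimate request cost in USD; prices are quoted per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# SiliconFlow rates for inclusionAI/Ling-mini-2.0: $0.07 input, $0.28 output per 1M tokens.
cost = request_cost(100_000, 50_000, 0.07, 0.28)
print(f"${cost:.4f}")  # → $0.0210 for 100k input + 50k output tokens
```

At these rates, even heavy workloads stay in the cents-per-request range, consistent with the model's low-cost positioning.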