inclusionAI/Ling-flash-2.0

Model Information

Slug: ling-flash-2-0
Aliases: ling-flash-2-0, lingflash20
Organization: SiliconFlow

Ling-flash-2.0 is an open-source Mixture-of-Experts (MoE) language model built on the Ling 2.0 architecture. It has 100 billion total parameters, of which 6.1 billion are activated during inference (4.8B non-embedding). Trained on more than 20 trillion tokens and refined with supervised fine-tuning and multi-stage reinforcement learning, it performs competitively with dense models of up to 40B parameters, and is particularly strong at complex reasoning, code generation, and frontend development.
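
Since the weights are open, the model can also be run locally. Below is a minimal sketch assuming the Hugging Face checkpoint inclusionAI/Ling-flash-2.0 exposes the standard transformers causal-LM interface; the use of trust_remote_code=True for the custom MoE layers is an assumption, not something this page confirms.

```python
# Minimal local-inference sketch (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-flash-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard the 100B-total-parameter MoE across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Write a binary search in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that while only ~6.1B parameters are active per forward pass, all 100B must fit in memory, so hosted inference via the providers below is often the more practical option.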

Available at 4 Providers
| Provider | Model Name | Original Model | Input ($/1M tokens) | Output ($/1M tokens) | Free |
|---|---|---|---|---|---|
| AIHubMix | Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.54 | No |
| SiliconFlow (China) | inclusionAI/Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.57 | No |
| SiliconFlow | inclusionAI/Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.57 | No |
| ZenMUX | inclusionAI: Ling-flash-2.0 | inclusionai/ling-flash-2.0 | $0.28 | $2.80 | No |
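
These providers serve the model through OpenAI-compatible APIs, so any standard client can call it. A minimal sketch follows, assuming SiliconFlow's OpenAI-compatible endpoint at https://api.siliconflow.cn/v1 and an API key in the SILICONFLOW_API_KEY environment variable; the per-million-token rates in the cost estimate are copied from the table above.

```python
# Hosted-inference sketch against the SiliconFlow row of the table
# (endpoint URL and env-var name are assumptions; see lead-in).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key=os.environ["SILICONFLOW_API_KEY"],
)

response = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",
    messages=[{"role": "user", "content": "Summarize the MoE architecture in two sentences."}],
    max_tokens=200,
)
print(response.choices[0].message.content)

# Rough cost estimate at SiliconFlow's listed rates: $0.14 input / $0.57 output per 1M tokens.
usage = response.usage
cost = usage.prompt_tokens * 0.14 / 1e6 + usage.completion_tokens * 0.57 / 1e6
print(f"~${cost:.6f} for {usage.total_tokens} tokens")
```

Swapping providers only requires changing the base URL and the model name to the values in the corresponding table row.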