inclusionAI/Ling-flash-2.0

Model Information
Slug: ling-flash-2-0
Organization: inclusionAI
Model Description
Ling-flash-2.0 is an open-source Mixture-of-Experts (MoE) language model developed under the Ling 2.0 architecture. It features 100 billion total parameters, with 6.1 billion activated during inference (4.8B non-embedding).

Trained on over 20 trillion tokens and refined with supervised fine-tuning and multi-stage reinforcement learning, the model performs competitively against dense models of up to 40B parameters. It excels at complex reasoning, code generation, and frontend development.
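Providers such as those listed below typically serve this model through an OpenAI-compatible chat-completions API. The sketch below only builds the request body for the model id from this page; the endpoint URL, auth scheme, and parameter defaults are assumptions, so check your chosen provider's documentation before use.

```python
import json

# Assumed OpenAI-compatible endpoint -- replace with your provider's URL.
API_URL = "https://api.siliconflow.cn/v1/chat/completions"

def build_request(prompt: str, model: str = "inclusionAI/Ling-flash-2.0") -> dict:
    """Return a JSON-serializable chat-completions request body.

    Only the model id comes from this page; max_tokens and temperature
    are illustrative defaults, not provider-mandated values.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.7,
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

Sending the payload is then a single POST with a bearer token from the provider, e.g. via `requests.post(API_URL, json=payload, headers=...)`.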
Available at 6 Providers

Provider            | Model Name                  | Original Model             | Input ($/1M) | Output ($/1M)
AIHubMix            | Ling-flash-2.0              | inclusionAI/Ling-flash-2.0 | $0.14        | $0.54
SiliconFlow (China) | inclusionAI/Ling-flash-2.0  | inclusionAI/Ling-flash-2.0 | $0.14        | $0.57
SiliconFlow         | Ling-flash-2.0              | inclusionAI/Ling-flash-2.0 | $0.14        | $0.57
302.AI              | inclusionAI/Ling-flash-2.0  | inclusionAI/Ling-flash-2.0 | $0.14        | $0.57
ZenMUX              | inclusionAI: Ling-flash-2.0 | inclusionai/ling-flash-2.0 | $0.28        | $2.80
Arena AI (Chat)     | ling-flash-2.0              | -                          | -            | -
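Since prices are quoted per million tokens, the cost of a request is a simple proportion. A minimal sketch (the helper name is mine, not from any provider SDK):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Cost in USD given token counts and $/1M-token prices."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example at SiliconFlow's rates ($0.14 in / $0.57 out):
# 200k prompt tokens and 50k completion tokens.
cost = estimate_cost(200_000, 50_000, 0.14, 0.57)
print(f"${cost:.4f}")
```

Note the asymmetry at ZenMUX: output tokens cost roughly 5x more than elsewhere, so long completions dominate the bill there.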