# inclusionAI/Ling-flash-2.0

Ling-flash-2.0 is an open-source Mixture-of-Experts (MoE) language model developed under the Ling 2.0 architecture. It features 100 billion total parameters, with 6.1 billion activated during inference (4.8B non-embedding). Trained on over 20 trillion tokens and refined with supervised fine-tuning and multi-stage reinforcement learning, the model demonstrates strong performance against dense models of up to 40B parameters. It excels in complex reasoning, code generation, and frontend development.

## Model Information

- **Organization**: [SiliconFlow](/llm.txt)
- **Slug**: ling-flash-2-0
- **Available at Providers**: 6

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | Ling-flash-2.0 | 0.14 | 0.54 | | [View](https://aihubmix.com/model/inclusionAI/Ling-flash-2.0) |
| [SiliconFlow (China)](/llm/siliconflowcn.txt) | inclusionAI/Ling-flash-2.0 | 0.14 | 0.57 | | |
| [SiliconFlow](/llm/siliconflow.txt) | Ling-flash-2.0 | 0.14 | 0.57 | | [View](https://www.siliconflow.com./models/ling-flash-2-0) |
| [ZenMUX](/llm/zenmux.txt) | inclusionAI: Ling-flash-2.0 | 0.28 | 2.80 | | |
| [302.AI](/llm/302ai.txt) | inclusionAI/Ling-flash-2.0 | 0.14 | 0.57 | | [View](https://302ai-en.apifox.cn/api-252564719) |
| [Arena AI](/llm/arenaai.txt) | | | | | |

---

[← Back to all providers](/llm.txt)
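As a sanity check on the pricing table above, here is a minimal sketch of estimating per-request cost from token counts. The helper name is illustrative, not from any provider SDK; the default rates are SiliconFlow's listed $0.14 input / $0.57 output per 1M tokens.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.14,
                      output_price_per_m: float = 0.57) -> float:
    """Estimate request cost in USD from per-1M-token prices.

    Defaults use SiliconFlow's listed rates for Ling-flash-2.0;
    pass other values for other providers (e.g. ZenMUX: 0.28 / 2.80).
    """
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: 10k input tokens and 2k output tokens at the default rates.
cost = estimate_cost_usd(10_000, 2_000)
print(f"${cost:.6f}")  # 10_000*0.14/1e6 + 2_000*0.57/1e6 = 0.00254
```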