SiliconFlow • ling-flash-2-0
| Slug | ling-flash-2-0 |
|---|---|
| Aliases | ling-flash-2-0, lingflash20 |
| Name | SiliconFlow |
Ling-flash-2.0 is an open-source Mixture-of-Experts (MoE) language model built on the Ling 2.0 architecture. It has 100 billion total parameters, of which 6.1 billion are activated per inference step (4.8B non-embedding). Trained on over 20 trillion tokens and refined with supervised fine-tuning and multi-stage reinforcement learning, it performs competitively with dense models of up to 40B parameters, and excels at complex reasoning, code generation, and frontend development.
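The providers listed below generally serve the model through OpenAI-compatible chat completions APIs. A minimal sketch of a request, assuming SiliconFlow's endpoint at https://api.siliconflow.cn/v1 and the model identifier from the pricing table; the API key is a placeholder:

```python
# Minimal sketch: querying Ling-flash-2.0 via an OpenAI-compatible endpoint.
# Base URL and model ID follow SiliconFlow's listing below; swap in another
# provider's base URL and model name to use that provider instead.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # assumed SiliconFlow endpoint
    api_key="YOUR_API_KEY",                    # placeholder; use your own key
)

response = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",  # model name as listed by SiliconFlow
    messages=[
        {"role": "user", "content": "Summarize Mixture-of-Experts routing in two sentences."},
    ],
)
print(response.choices[0].message.content)
```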
| Provider | Model Name | Original Model | Input ($/1M) | Output ($/1M) | Free |
|---|---|---|---|---|---|
| AIHubMix | Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.54 | |
| SiliconFlow (China) | inclusionAI/Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.57 | |
| SiliconFlow | inclusionAI/Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 | $0.14 | $0.57 | |
| ZenMUX | inclusionAI: Ling-flash-2.0 | inclusionai/ling-flash-2.0 | $0.28 | $2.80 | |
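Rates are quoted in dollars per million tokens, so a request costs input_tokens times the input rate plus output_tokens times the output rate, divided by 1,000,000. A quick sketch with hypothetical token counts (real counts are returned in the response's usage field):

```python
# Estimate per-request cost from the per-million-token rates in the table.
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """input_rate and output_rate are in $ per 1M tokens, as listed above."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Hypothetical usage: 2,000 prompt tokens and 800 completion tokens at
# SiliconFlow's $0.14 / $0.57 rates.
print(f"${request_cost(2_000, 800, 0.14, 0.57):.6f}")  # -> $0.000736
```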