
inclusionAI: LLaDA2-flash-CAP

Model Information
Slug: llada2-0-flash-cap
Organization: InclusionAI
Model Description
LLaDA2.0-flash-CAP is an enhanced version of LLaDA2.0-flash that significantly improves inference efficiency by incorporating Confidence-Aware Parallelism (CAP) training. Built on a Mixture-of-Experts (MoE) diffusion architecture with 100B total parameters, the model achieves faster parallel decoding while maintaining strong performance across a range of benchmarks.
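The parallel-decoding idea behind the description above can be sketched as follows: in a masked diffusion LM, each step predicts every masked position at once, and confidence-aware decoding commits all predictions whose confidence clears a threshold, so easy tokens resolve in parallel and the step count drops well below the sequence length. This is an illustrative toy, assuming a thresholded commit rule; the `toy_denoiser`, `MASK` sentinel, and threshold value are stand-ins, not the actual LLaDA2.0 denoiser or the CAP training objective.

```python
# Toy sketch of confidence-aware parallel decoding for a masked diffusion LM.
# The "denoiser" here is a random stand-in (an assumption for illustration),
# not the real model; only the commit-by-confidence control flow is the point.
import random

MASK = None       # sentinel for a not-yet-decoded position
THRESHOLD = 0.9   # illustrative confidence cutoff

def toy_denoiser(seq):
    """Stand-in for the diffusion model: returns a (token, confidence)
    prediction for every currently masked position."""
    return {i: (f"tok{i}", random.random()) for i, t in enumerate(seq) if t is MASK}

def decode(length, threshold=THRESHOLD, seed=0):
    random.seed(seed)
    seq = [MASK] * length
    steps = 0
    while MASK in seq:
        preds = toy_denoiser(seq)
        # Commit every masked position whose confidence clears the threshold...
        confident = [i for i, (_, c) in preds.items() if c >= threshold]
        # ...but always commit at least the single most confident position,
        # so each step is guaranteed to make progress.
        if not confident:
            confident = [max(preds, key=lambda i: preds[i][1])]
        for i in confident:
            seq[i] = preds[i][0]
        steps += 1
    return seq, steps

seq, steps = decode(16)
print(f"decoded {len(seq)} tokens in {steps} steps")
```

Because several positions can be committed per step, `steps` is typically much smaller than the sequence length, which is the speedup the CAP description refers to.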
Available at 1 Provider
Provider: ZenMUX
Model Name: inclusionAI: LLaDA2-flash-CAP
Original Model: inclusionai/llada2.0-flash-cap
Input: $0.28 / 1M tokens
Output: $2.85 / 1M tokens
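A request to the model through a provider would typically use the slug from the listing above. A minimal sketch, assuming the provider exposes an OpenAI-compatible chat-completions endpoint; the base URL and the `ZENMUX_API_KEY` environment variable are hypothetical, only the model slug comes from this page.

```python
# Sketch of building an OpenAI-style chat-completions request for the model.
# The endpoint URL and API-key variable below are assumptions for illustration;
# the model slug "inclusionai/llada2.0-flash-cap" is taken from the listing.
import json
import os

BASE_URL = "https://zenmux.ai/api/v1"  # hypothetical endpoint
API_KEY = os.environ.get("ZENMUX_API_KEY", "")  # hypothetical env var
MODEL_SLUG = "inclusionai/llada2.0-flash-cap"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload for the model."""
    return {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize diffusion language models in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would be POSTed to `{BASE_URL}/chat/completions` with a bearer-token header; input tokens are billed at $0.28 per million and output tokens at $2.85 per million, per the pricing above.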