inclusionAI: LLaDA2-flash-CAP

Model Information
Slug: llada2-0-flash-cap
Organization: inclusionAI
Model Description
LLaDA2.0-flash-CAP is an enhanced version of LLaDA2.0-flash that significantly improves inference efficiency by incorporating Confidence-Aware Parallelism (CAP) training. Built on a Mixture-of-Experts (MoE) diffusion architecture with 100B total parameters, the model achieves faster parallel decoding while maintaining strong performance across a range of benchmarks.
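To illustrate the general idea behind confidence-aware parallel decoding in a diffusion LLM, here is a minimal, hypothetical sketch: at each step the model scores every masked position, and all predictions whose confidence clears a threshold are committed in parallel, so fewer forward passes are needed than with one-token-at-a-time decoding. The `predict` function, the threshold value, and the token names are illustrative stand-ins, not the actual LLaDA2.0/CAP implementation.

```python
import random

MASK = "<mask>"

def predict(seq):
    """Hypothetical stand-in for the diffusion model's forward pass:
    returns a (token, confidence) guess for every masked position."""
    return {i: (f"tok{i}", random.random())
            for i, t in enumerate(seq) if t == MASK}

def decode_parallel(seq, threshold=0.5):
    """Confidence-aware parallel decoding sketch: each step commits every
    masked position whose confidence clears the threshold, so multiple
    tokens are filled per model call; the single most confident prediction
    is always accepted as a fallback to guarantee progress."""
    steps = 0
    while MASK in seq:
        preds = predict(seq)
        # Accept all sufficiently confident predictions in parallel.
        accepted = [i for i, (_, c) in preds.items() if c >= threshold]
        if not accepted:
            # Fall back to the single highest-confidence position.
            accepted = [max(preds, key=lambda i: preds[i][1])]
        for i in accepted:
            seq[i] = preds[i][0]
        steps += 1
    return seq, steps

random.seed(0)
seq, steps = decode_parallel([MASK] * 8)
```

Because several positions are typically accepted per step, `steps` comes out well below the sequence length, which is the efficiency gain CAP training is meant to amplify by making the model's confidence estimates more reliable under parallel commitment.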