Model Description
Ring-1T is a trillion-parameter sparse mixture-of-experts (MoE) thinking model developed by inclusionAI. It adopts the Ling 2.0 architecture and is trained from the Ling-1T-base foundation model, which has 1 trillion total parameters with 50 billion activated per token, and supports a context window of up to 128K tokens. Building on the preview version released at the end of September, Ring-1T has undergone continued scaling with large-scale reinforcement learning with verifiable rewards (RLVR), further unlocking the natural-language reasoning capabilities of the trillion-parameter foundation model.