# inclusionAI: Ring-1T

Ring-1T is a trillion-parameter sparse mixture-of-experts (MoE) thinking model developed by inclusionAI. It adopts the Ling 2.0 architecture and is trained on the Ling-1T-base foundation model, which contains 1 trillion total parameters with 50 billion activated parameters, supporting a context window of up to 128K tokens. Building upon the preview version released at the end of September, Ring-1T has undergone continued scaling with large-scale reinforcement learning with verifiable rewards (RLVR), further unlocking the natural-language reasoning capabilities of the trillion-parameter foundation model.

## Model Information

- **Organization**: [InclusionAI](/llm.txt)
- **Slug**: ring-1t
- **Available at Providers**: 4

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | Ring-1T | 0.55 | 2.19 | | [View](https://aihubmix.com/model/inclusionAI/Ring-1T) |
| [Bailing](/llm/bailing.txt) | Ring-1T | 0.57 | 2.29 | | |
| [ZenMUX](/llm/zenmux.txt) | inclusionAI: Ring-1T | 0.56 | 2.24 | | |
| [Arena AI](/llm/arenaai.txt) | | | | | |

---

[← Back to all providers](/llm.txt)