# MoonshotAI: Kimi K2 0711

Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.

## Model Information

- **Organization**: [Moonshot AI](/llm.txt)
- **Slug**: kimi-k2-0711
- **Available at Providers**: 3

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | kimi-k2-0711 | 0.54 | 2.16 | | [View](https://aihubmix.com/model/kimi-k2-0711) |
| [Helicone](/llm/helicone.txt) | Kimi K2 (07/11) | 0.57 | 2.30 | | [View](https://www.helicone.ai/model/kimi-k2-0711) |
| [ZenMUX](/llm/zenmux.txt) | MoonshotAI: Kimi K2 0711 | 0.56 | 2.23 | | |

---

[← Back to all providers](/llm.txt)
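As a rough illustration of how the per-1M-token prices in the table translate into per-request cost, here is a minimal sketch. The prices come from the table above; the `request_cost` helper and the `PROVIDERS` mapping are hypothetical conveniences for this example, not part of any provider's API.

```python
# Per-1M-token USD prices for Kimi K2 0711, copied from the providers table.
PROVIDERS = {
    "AIHubMix": {"input": 0.54, "output": 2.16},
    "Helicone": {"input": 0.57, "output": 2.30},
    "ZenMUX":   {"input": 0.56, "output": 2.23},
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request: tokens * (price per 1M tokens)."""
    prices = PROVIDERS[provider]
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

if __name__ == "__main__":
    # Example: a 100K-token prompt with a 4K-token completion on each provider.
    for name in PROVIDERS:
        print(f"{name}: ${request_cost(name, 100_000, 4_000):.4f}")
```

For instance, at AIHubMix's rates a 100K-input / 4K-output request costs (100,000 × 0.54 + 4,000 × 2.16) / 1,000,000 ≈ $0.0626.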