# GLM-4.5

GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment.

It supports a hybrid inference mode with two options: a "thinking mode" designed for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)

## Model Information

- **Organization**: [Z.ai](/llm.txt)
- **Slug**: glm-4-5
- **Available at Providers**: 38
- **Release Date**: July 28, 2025

### Benchmark Scores

- HLE: 0.144
- GPQA: 0.791
- SWE Bench: 0.642
- Terminal: 0.375
- Browsecomp: 0.264

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | glm-4.5 | 0.40 | 1.60 | | [View](https://aihubmix.com/model/glm-4.5) |
| [AIMLAPI](/llm/aimlapi.txt) | GLM 4.5 | | | | |
| [FastRouter](/llm/fastrouter.txt) | Z.AI: GLM 4.5 | 0.60 | 2.20 | | [View](https://fastrouter.ai/models/z-ai/glm-4.5) |
| [Abacus](/llm/abacus.txt) | GLM-4.5 | 0.60 | 2.20 | | |
| [Novita AI](/llm/novita.txt) | glm-4.5 | 0.60 | 2.20 | | |
| [SiliconFlow](/llm/siliconflow.txt) | zai-org/GLM-4.5 | 0.40 | 2.00 | | |
| [Zhipu AI](/llm/zhipuai.txt) | GLM-4.5 | 0.60 | 2.20 | | |
| [ModelScope](/llm/modelscope.txt) | GLM-4.5 | 0.00 | 0.00 | Yes | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.5 (Thinking) | | | | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.5 | | | | |
| [OpenRouter](/llm/openrouter.txt) | GLM 4.5 | 0.55 | 2.00 | | [View](https://openrouter.ai/z-ai/glm-4.5) |
| [Poe](/llm/poe.txt) | GLM-4.5 | 5700.00 | | | [View](https://poe.com/glm-4.5/api) |
| [Requesty](/llm/requesty.txt) | | 0.60 | 2.20 | | |
| [ValorGPT](/llm/valorgpt.txt) | GLM 4.5 | | | | [View](https://www.valorgpt.com/models/z-ai-glm-4.5) |
| [Vercel AI Gateway](/llm/vercel.txt) | GLM 4.5 | 0.60 | 2.20 | | |
| [Yupp](/llm/yupp.txt) | GLM 4.5 (Novita) | | | | |
| [Yupp](/llm/yupp.txt) | GLM 4.5 (Z.ai) | | | | |
| [Yupp](/llm/yupp.txt) | GLM 4.5 (OpenRouter) | | | | |
| [Z.AI](/llm/zai.txt) | GLM-4.5 | 0.60 | 0.11 | | |
| [ZenMUX](/llm/zenmux.txt) | Z.AI: GLM 4.5 | 0.35 | 1.54 | | |
| [Nebius Token Factory](/llm/nebius.txt) | GLM-4.5 | 0.60 | 2.20 | | [View](https://huggingface.co/zai-org/GLM-4.5) |
| [Glama](/llm/glama.txt) | glm-4.5 | 0.60 | 2.20 | | [View](https://glama.ai/gateway/models/glm-4.5) |
| [LangDB](/llm/langdb.txt) | glm-4.5 | | | | [View](https://langdb.ai/app/models) |
| [302.AI](/llm/302ai.txt) | glm-4.5 | 0.29 | 1.14 | | [View](https://302ai-en.apifox.cn/207705116e0) |
| [SiliconFlow](/llm/siliconflow.txt) | GLM-4.5 | | | | |
| [Kilo Code](/llm/kilocode.txt) | Z.ai: GLM 4.5 | 0.35 | 1.55 | | |
| [302.AI](/llm/302ai.txt) | zai-org/glm-4.5 | 0.57 | 2.29 | | [View](https://302ai-en.apifox.cn/api-308032503) |
| [Jiekou.AI](/llm/jiekou.txt) | GLM-4.5 | 0.60 | 2.20 | | |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/z-ai/glm-4.5 | | | | |
| [CometAPI](/llm/cometapi.txt) | glm-4.5 | 0.96 | 3.84 | | |
| [Qiniu](/llm/qiniuai.txt) | GLM 4.5 | | | | |
| [ApiYI](/llm/apiyi.txt) | glm-4.5 | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | glm-4.5 | 0.39 | 1.71 | | |
| [Airforce API](/llm/airforce.txt) | glm-4.5 | | | | |
| [LLM Stats](/llm/llmstats.txt) | GLM-4.5 | | | | |

---

[← Back to all providers](/llm.txt)
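
## Example: toggling thinking mode

The `reasoning` `enabled` boolean described above can be sketched as part of an OpenRouter-style chat-completions payload. This is a minimal, illustrative sketch: the `z-ai/glm-4.5` slug is taken from the OpenRouter row in the table, the `reasoning.enabled` field follows the reasoning-tokens docs linked above, and `build_payload` is a hypothetical helper. It only constructs and inspects the request body; actually sending it requires an API key.

```python
def build_payload(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions request body for GLM-4.5.

    `thinking=True` selects the "thinking mode" for complex reasoning
    and tool use; `thinking=False` selects the faster non-thinking mode.
    """
    return {
        "model": "z-ai/glm-4.5",  # slug from the OpenRouter provider row
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": thinking},  # the hybrid-mode toggle
    }

payload = build_payload("Plan a multi-step web-scraping agent.", thinking=True)
print(payload["reasoning"])  # {'enabled': True}
```

With credentials, a payload like this would be POSTed to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <key>` header; consult the linked docs for provider-specific variations.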