# GLM-4.7-Flash

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding use cases, with stronger coding capabilities, long-horizon task planning, and tool collaboration, and it achieves leading performance among open-source models of its size on several current public benchmark leaderboards.

## Model Information

- **Organization**: [Z.ai](/llm.txt)
- **Slug**: glm-4-7-flash
- **Available at Providers**: 32
- **Release Date**: January 19, 2026

### Benchmark Scores

- Weekly: 0.09
- AIME 2025: 0.916
- HLE: 0.144
- GPQA: 0.752
- SWE Bench: 0.592
- Browsecomp: 0.428

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | glm-4.7-flash-free | 0.00 | 0.00 | Yes | [View](https://aihubmix.com/model/glm-4.7-flash-free) |
| [Hugging Face](/llm/huggingface.txt) | GLM-4.7-Flash | 0.00 | 0.00 | Yes | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.7 Flash Original | | | | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.7 Flash | | | | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.7 Flash Thinking | | | | |
| [Novita AI](/llm/novita.txt) | glm-4.7-flash | 0.07 | 0.40 | | |
| [OpenRouter](/llm/openrouter.txt) | GLM 4.7 Flash | 0.06 | 0.40 | | [View](https://openrouter.ai/z-ai/glm-4.7-flash-20260119) |
| [Vercel AI Gateway](/llm/vercel.txt) | GLM 4.7 Flash | | | | |
| [Yupp](/llm/yupp.txt) | GLM 4.7 (Novita) | | | | |
| [Yupp](/llm/yupp.txt) | GLM 4.7 (OpenRouter) | | | | |
| [Z.AI](/llm/zai.txt) | GLM-4.7-Flash | 0.00 | 0.00 | Yes | |
| [ZenMUX](/llm/zenmux.txt) | Z.AI: GLM 4.7 Flash (Free) | 0.00 | 0.00 | Yes | |
| [Nano-GPT](/llm/nanogpt.txt) | GLM 4.7 Flash TEE | | | | |
| [Zhipu AI](/llm/zhipuai.txt) | GLM-4.7-Flash | 0.00 | 0.00 | Yes | |
| [Poe](/llm/poe.txt) | GLM-4.7-Flash | 0.07 | 0.40 | | [View](https://poe.com/glm-4.7-flash/api) |
| [Glama](/llm/glama.txt) | glm-4.7-flash | 0.07 | 0.40 | | [View](https://glama.ai/gateway/models/glm-4.7-flash) |
| [DeepInfra](/llm/deepinfra.txt) | GLM-4.7-Flash | 0.06 | 0.40 | | |
| [Kilo Code](/llm/kilocode.txt) | Z.ai: GLM 4.7 Flash | 0.06 | 0.40 | | |
| [RedPill](/llm/redpill.txt) | Z.AI: GLM 4.7 Flash | 0.10 | 0.43 | | |
| [IO.NET](/llm/ionet.txt) | GLM-4.7-Flash | 0.07 | 0.40 | | |
| [GMI Cloud](/llm/gmi.txt) | ZAI: GLM-4.7-Flash | 0.07 | 0.40 | | |
| [Venice](/llm/venice.txt) | GLM 4.7 Flash | 0.13 | 0.50 | | |
| [302.AI](/llm/302ai.txt) | zai-org/glm-4.7-flash | 0.07 | 0.43 | | [View](https://302ai-en.apifox.cn/api-308032503) |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Jiekou.AI](/llm/jiekou.txt) | GLM-4.7-Flash | 0.07 | 0.40 | | |
| [Okara](/llm/okara.txt) | GLM 4.7 Flash | | | | [View](https://okara.ai/ai-models/glm-4.7-flashx) |
| [RouterLink](/llm/routerlink.txt) | GLM 4.7 Flash | | | | |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/z-ai/glm-4.7-flash | | | | |
| [LangDB](/llm/langdb.txt) | glm-4.7-flash | | | | [View](https://langdb.ai/app/models) |
| [Airforce API](/llm/airforce.txt) | glm-4.7-flash | | | | |
| [Writingmate](/llm/writingmate.txt) | Z.ai: GLM 4.7 Flash | | | | [View](https://writingmate.ai/models/z-ai/glm-4.7-flash) |
| [LLM Stats](/llm/llmstats.txt) | GLM-4.7-Flash | | | | |

---

[← Back to all providers](/llm.txt)
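Since the table quotes prices per 1M tokens, comparing providers for a given workload is simple arithmetic. A minimal sketch of a per-request cost estimate, using OpenRouter's listed rates ($0.06 input / $0.40 output) and illustrative token counts (both are assumptions for the example, not measured usage):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the dollar cost of one request given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion
# at OpenRouter's listed rates of $0.06 / $0.40 per 1M tokens.
cost = request_cost(4_000, 1_000, 0.06, 0.40)
print(f"${cost:.6f}")  # 4000*0.06/1e6 + 1000*0.40/1e6 = $0.000640
```

Because output tokens are several times more expensive than input tokens at every paid provider listed, completion length tends to dominate the bill for long responses.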