# GPT-5.2 Codex

GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5.1-Codex, GPT-5.2-Codex is more steerable, adheres more closely to developer instructions, and produces cleaner, higher-quality code. Reasoning effort can be adjusted with the `reasoning.effort` parameter; see the [reasoning-effort docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#reasoning-effort-level).

Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also supports multimodal inputs such as images or screenshots for UI development, and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.
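As a minimal sketch of how the `reasoning.effort` parameter is set, the snippet below builds an OpenRouter-style chat-completions request body for this model. The model slug follows the OpenRouter listing above; the exact effort levels accepted (e.g. `"low"`, `"medium"`, `"high"`) should be confirmed against the linked reasoning-tokens docs.

```python
import json

# Sketch: request payload for GPT-5.2-Codex with an explicit reasoning effort.
# The "reasoning" object with an "effort" field follows the OpenRouter docs
# linked above; treat the effort values as illustrative.
payload = {
    "model": "openai/gpt-5.2-codex",
    "messages": [
        {"role": "user", "content": "Refactor this function for readability."}
    ],
    # Dial reasoning up for large refactors, down for quick interactive edits.
    "reasoning": {"effort": "high"},
}

# This JSON body would be POSTed to the provider's chat completions endpoint.
print(json.dumps(payload, indent=2))
```

The same payload shape works across the OpenAI-compatible gateways in the provider table below; only the base URL and API key differ.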
## Model Information

- **Organization**: [OpenAI](/llm.txt)
- **Slug**: gpt-5-2-codex
- **Available at Providers**: 54
- **Release Date**: January 14, 2026

### Benchmark Scores

- Weekly: 1.25

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | gpt-5.2-codex | 1.75 | 14.00 | | [View](https://aihubmix.com/model/gpt-5.2-codex) |
| [GitHub Copilot](/llm/githubcopilot.txt) | GPT-5.2-Codex | 0.00 | 0.00 | Yes | |
| [Azure OpenAI](/llm/azure.txt) | GPT-5.2 Codex | 1.75 | 14.00 | | |
| [OpenAI](/llm/openai.txt) | GPT-5.2 Codex | 1.75 | 14.00 | | |
| [Azure AI Services](/llm/azurecognitiveservices.txt) | GPT-5.2 Codex | 1.75 | 14.00 | | |
| [Nano-GPT](/llm/nanogpt.txt) | GPT 5.2 Codex | | | | |
| [OpenRouter](/llm/openrouter.txt) | GPT-5.2-Codex | 1.75 | 14.00 | | [View](https://openrouter.ai/openai/gpt-5.2-codex-20260114) |
| [OpenCode Zen](/llm/opencode.txt) | gpt-5.2-codex | 1.75 | 14.00 | | |
| [Poe](/llm/poe.txt) | GPT-5.2-Codex | 1.60 | 13.00 | | [View](https://poe.com/gpt-5.2-codex/api) |
| [Vercel AI Gateway](/llm/vercel.txt) | GPT-5.2-Codex | 1.75 | 14.00 | | |
| [Vivgrid](/llm/vivgrid.txt) | | 1.75 | 14.00 | | |
| [Yupp](/llm/yupp.txt) | GPT-5.2 Codex (Low) (OpenRouter) | | | | |
| [Yupp](/llm/yupp.txt) | GPT-5.2 Codex (OpenRouter) | | | | |
| [ZenMUX](/llm/zenmux.txt) | OpenAI: GPT-5.2-Codex | 1.75 | 14.00 | | |
| [Requesty](/llm/requesty.txt) | | 1.75 | 14.00 | | |
| [AIMLAPI](/llm/aimlapi.txt) | GPT-5.2 Codex | 1.84 | 14.70 | | |
| [Requesty](/llm/requesty.txt) | | 1.75 | 14.00 | | |
| [Requesty](/llm/requesty.txt) | | 1.75 | 14.00 | | |
| [Kilo Code](/llm/kilocode.txt) | OpenAI: GPT-5.2-Codex | 1.75 | 14.00 | | |
| [Roo Code](/llm/roocode.txt) | GPT-5.2-Codex | 1.75 | 14.00 | | [View](https://roocode.com) |
| [302.AI](/llm/302ai.txt) | gpt-5.2-codex | 1.75 | 14.00 | | [View](https://302ai-en.apifox.cn/api-207705102) |
| [Cline](/llm/cline.txt) | GPT 5.2 Codex | | | | |
| [Blackbox AI](/llm/blackboxai.txt) | OpenAI: GPT-5.2-Codex | 1.75 | 14.00 | | |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex High | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex Low | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex Extra High | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex Fast | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex High Fast | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex Low Fast | | | | [View](https://cursor.com/docs/models) |
| [Cursor](/llm/cursor.txt) | GPT-5.2 Codex Extra High Fast | | | | [View](https://cursor.com/docs/models) |
| [Arena AI](/llm/arenaai.txt) | | | | | |
| [Jiekou.AI](/llm/jiekou.txt) | gpt-5.2-codex | 1.75 | 14.00 | | |
| [FastRouter](/llm/fastrouter.txt) | OpenAI: GPT-5.2-Codex | 1.75 | 14.00 | | [View](https://fastrouter.ai/models/openai/gpt-5.2-codex) |
| [Warp](/llm/warp.txt) | GPT-5.2 Codex | | | | [View](https://docs.warp.dev/agent-platform/capabilities/model-choice) |
| [Factory.ai](/llm/factoryai.txt) | GPT-5.2-Codex | | | | |
| [CometAPI](/llm/cometapi.txt) | GPT-5.2 Codex | 1.40 | 11.20 | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Low Reasoning) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Medium Reasoning) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (High Reasoning) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Extra High Reasoning) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Low Reasoning Fast) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Medium Reasoning Fast) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (High Reasoning Fast) | | | | |
| [Windsurf](/llm/windsurf.txt) | GPT-5.2-Codex (Extra High Reasoning Fast) | | | | |
| [QiHang](/llm/qihangai.txt) | GPT-5.2 Codex | 0.14 | 1.14 | | |
| [Zed](/llm/zed.txt) | GPT-5.2 Codex | 1.38 | 11.00 | | [View](https://zed.dev/docs/ai/models) |
| [ApiYI](/llm/apiyi.txt) | gpt-5.2-codex | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | gpt-5.2-codex | 1.93 | 15.40 | | |
| [ZO Computer](/llm/zocomputer.txt) | GPT-5.2 Codex | | | | |
| [Airforce API](/llm/airforce.txt) | gpt-5.2-codex | | | | |
| [Writingmate](/llm/writingmate.txt) | OpenAI: GPT-5.2-Codex | | | | [View](https://writingmate.ai/models/openai/gpt-5.2-codex) |
| [Cloudflare AI Gateway](/llm/cloudflareaigateway.txt) | GPT-5.2 Codex | 1.75 | 14.00 | | |
| [LLM Stats](/llm/llmstats.txt) | GPT-5.2 Codex | | | | |

---

[← Back to all providers](/llm.txt)