Source: OpenAI Developer Docs, April 27, 2026

GPT-5.5 Becomes Default Codex Model with 400K Context Window


During the week of April 27, OpenAI completed the broad rollout of GPT-5.5 as the recommended default model in Codex across the Plus, Pro, Business, Enterprise, Edu, and Go tiers. The integration brings a 400K context window, a new Fast mode (roughly 1.5x faster at 2.5x the cost), and full infrastructure backing from NVIDIA to the world's most-used coding agent.

Key Points:

• GPT-5.5 is now the recommended model for all Codex tasks, replacing GPT-5.4 as the default. Users on older app versions need to update to see it in their model picker.

• 400K context window in Codex (vs. 1M in the API) — still a major step up, allowing full codebase analysis and extended agentic sessions.

• Fast mode generates tokens roughly 1.5x faster at 2.5x the cost — a deliberate speed-vs-cost tradeoff for teams with high-throughput coding needs.

• NVIDIA engineers have been actively using GPT-5.5 through Codex to build and accelerate their own AI infrastructure — a meaningful third-party validation of production readiness.

• Key benchmarks: SWE-Bench 58.6%, Terminal-Bench 82.7%, Dynamic Reasoning Time up to 7 hours.
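The Fast mode tradeoff above is easy to quantify. The sketch below works through the arithmetic using the 1.5x speed and 2.5x cost multipliers from the article; the baseline throughput and price figures are hypothetical placeholders, not published numbers.

```python
# Back-of-the-envelope comparison of standard vs Fast mode.
# Baseline figures below are assumed for illustration only.
BASE_TOKENS_PER_SEC = 100.0   # hypothetical baseline generation speed
BASE_COST_PER_1M = 10.0       # hypothetical $ per 1M output tokens

def mode_stats(speed_mult: float, cost_mult: float) -> tuple[float, float]:
    """Return (tokens/sec, $ per 1M tokens) for a given mode."""
    return BASE_TOKENS_PER_SEC * speed_mult, BASE_COST_PER_1M * cost_mult

standard_tps, standard_cost = mode_stats(1.0, 1.0)
fast_tps, fast_cost = mode_stats(1.5, 2.5)

# A fixed-size task finishes in 1/1.5 (~67%) of the wall-clock time...
time_ratio = standard_tps / fast_tps
# ...but the same token count costs 2.5x as much.
cost_ratio = fast_cost / standard_cost

print(f"time ratio (fast/standard): {time_ratio:.3f}")   # ~0.667
print(f"cost ratio (fast/standard): {cost_ratio:.1f}")   # 2.5
```

In other words, Fast mode buys back about a third of the wall-clock time per task in exchange for 2.5x the spend, which is why the article frames it as a deliberate tradeoff for high-throughput teams rather than a general default.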

GPT-5.5 in Codex is not just a model upgrade — it is the culmination of a year of agentic infrastructure investment. The combination of computer use, memory, background scheduling, and now GPT-5.5 makes Codex the most capable developer agent available commercially.

Why It Matters: The Terminal-Bench score of 82.7% and Dynamic Reasoning Time of up to 7 hours translate directly into real-world capability: this model can handle end-to-end agentic tasks that were impossible six months ago. The NVIDIA partnership creates a feedback loop accelerating the entire AI stack.
