OpenAI has announced a new version of its programming-focused AI model, GPT-5.1-Codex-Max, which is designed to handle larger and more complex coding tasks. The new model will be available tomorrow for ChatGPT Plus, Pro, Business, Edu, and Enterprise users, replacing the previous GPT-5.1-Codex as the recommended model for agentic coding.
The standout feature of Codex Max is its improved context handling through a process called “compaction.” When the model's context window fills up, it compresses older parts of the conversation or code context into a condensed form, enabling it to “coherently work over millions of tokens in a single task.” OpenAI claims this allows Codex Max to work on a single assignment for up to 24 hours, making it suitable for massive system-wide refactors.
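OpenAI has not published how compaction actually works, but the general idea of keeping a session under a token budget by collapsing older context into a summary can be sketched in a few lines. Everything below is illustrative: the whitespace tokenizer and the stub `summarize()` function are stand-ins for a real tokenizer and a model-generated summary.

```python
# Illustrative sketch of context "compaction" (hypothetical; not OpenAI's
# actual mechanism). When the running history exceeds a token budget, all
# entries older than the most recent few are collapsed into one summary.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def summarize(entries: list[str]) -> str:
    # Placeholder for a model-generated summary of the dropped entries.
    return f"[summary of {len(entries)} earlier steps]"

def compact(history: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """If history exceeds the budget, replace everything older than the
    last `keep_recent` entries with a single summary entry."""
    if sum(count_tokens(e) for e in history) <= budget:
        return history  # already fits; nothing to do
    cut = len(history) - keep_recent
    if cut <= 0:
        return history  # too little history to compact further
    return [summarize(history[:cut])] + history[cut:]

# Usage: a long agentic session shrinks to a summary plus recent steps.
history = [f"step {i}: edited file_{i}.py and ran tests" for i in range(100)]
compacted = compact(history, budget=60)
print(len(compacted))  # summary entry + the 4 most recent steps
```

In a real agent loop this would run repeatedly, re-summarizing as the session grows, which is presumably what lets the model keep working across millions of cumulative tokens without ever exceeding its fixed context window.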
In addition to handling larger workloads, the model is more efficient. According to OpenAI, Codex Max maintains the same accuracy as its predecessor on the SWE-Bench Verified evaluation but uses 30% fewer thinking tokens and runs 27% to 42% faster on real-world tasks. This efficiency could translate to longer usage times for users on capped plans.
The new model is also the first from OpenAI to be specifically trained for Windows environments, improving how it operates in the Windows command-line interface (CLI). On the security front, OpenAI notes that Codex Max’s sustained reasoning capabilities enhance its cybersecurity monitoring, though the company still recommends running the agent in a restricted-access sandbox with network access disabled to mitigate prompt-injection risks.





