You can now get more Codex usage from your plan and credits with three updates today:
1️⃣ GPT-5-Codex-Mini — a more compact and cost-efficient version of GPT-5-Codex
2️⃣ 50% higher rate limits for ChatGPT Plus, Business, and Edu
3️⃣ Priority processing for ChatGPT Pro and Enterprise
GPT-5-Codex-Mini delivers roughly 4x more usage than GPT-5-Codex, with a slight capability tradeoff owing to its smaller size.
Available in the CLI and IDE extension when you sign in with ChatGPT, with API support coming soon.
Select GPT-5-Codex-Mini for easier tasks or to extend usage when you’re close to hitting rate limits.
Codex will also suggest switching to it when you reach 90% of your limits, so you can work longer without interruptions.
We’ve also landed efficiency improvements to get more out of our GPUs.
ChatGPT Plus, Business, and Edu users get 50% higher rate limits as a result, and Pro and Enterprise accounts get priority processing for maximum speed.
• • •
The deep research models in the API are the same post-trained o3 and o4-mini models that power deep research in ChatGPT.
They also support MCP (search/fetch) and Code Interpreter.
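A minimal sketch of calling one of these models through the Responses API with the Python SDK, assuming the `o4-mini-deep-research` model identifier and the current `web_search_preview` and `code_interpreter` tool types; background mode suits these long-running tasks:

```python
from openai import OpenAI

client = OpenAI()

# Start a deep research run in background mode rather than blocking on a
# single HTTP request; an MCP tool ({"type": "mcp", ...}) could be added
# alongside or instead of web search.
response = client.responses.create(
    model="o4-mini-deep-research",  # assumed model ID; o3-deep-research also exists
    input="Summarize recent research on sparse mixture-of-experts models.",
    background=True,
    tools=[
        {"type": "web_search_preview"},
        {"type": "code_interpreter", "container": {"type": "auto"}},
    ],
)
print(response.id, response.status)  # poll by ID, or use a webhook (below)
```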
And instead of polling, webhooks now let you receive notifications for certain API events, such as completed responses, fine-tuning jobs, and batch jobs.
We recommend using webhooks for long-horizon tasks (like deep research!) to improve reliability.
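As a sketch of that pattern, here is a minimal Flask receiver, assuming the Python SDK's webhook verification helper (`client.webhooks.unwrap`) and the `response.completed` event type; the endpoint path and secret env var are placeholders:

```python
import os

from flask import Flask, request
from openai import OpenAI

app = Flask(__name__)
# The secret comes from the webhook endpoint configured in the dashboard.
client = OpenAI(webhook_secret=os.environ["OPENAI_WEBHOOK_SECRET"])

@app.route("/openai-webhook", methods=["POST"])
def openai_webhook():
    # unwrap() verifies the signature headers and parses the event payload.
    event = client.webhooks.unwrap(request.data, request.headers)
    if event.type == "response.completed":
        # e.g. a background deep research run just finished; fetch its output.
        result = client.responses.retrieve(event.data.id)
        print(result.output_text)
    return "", 200
```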
Centrally manage, version, and optimize prompts. Use (and reuse!) prompts across the Playground, API, Evals, and Stored Completions.
🆕 The Prompt object is a new resource that can be referenced in the Responses API and our SDKs. Prompts can be preconfigured too—including tools, models, and messages. No more manual copying and pasting!
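A minimal sketch of referencing a saved Prompt from the Responses API; the Prompt ID, version, and variable name below are placeholders for a Prompt created in the dashboard:

```python
from openai import OpenAI

client = OpenAI()

# No inline messages needed: the referenced Prompt can preconfigure the
# model, tools, and messages, so only its variables are supplied here.
response = client.responses.create(
    prompt={
        "id": "pmpt_abc123",             # placeholder Prompt ID
        "version": "2",                  # optional: pin a specific version
        "variables": {"city": "Paris"},  # placeholder template variable
    },
)
print(response.output_text)
```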
And in the Playground:
⚡️ We’ve improved the “Generate” button: now, hit “Optimize”, and your prompt will be even better tuned for the API.
🔡 Replacing “presets”, you can now save Prompts, including their versions and configurations, and reuse or share them.
You can now use tools and Structured Outputs when completing eval runs, and evaluate tool calls based on the arguments passed and responses returned. This supports tools that are OpenAI-hosted, MCP, and non-hosted. Read more in our guides below.