These models are the same post-trained o3 and o4-mini models that power deep research in ChatGPT.
They also support MCP (search/fetch) and Code Interpreter.
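As a rough sketch of how that looks, here is a Responses API request body with both an MCP server and Code Interpreter attached to a deep research model. The MCP server URL and label are placeholders, not real endpoints:

```python
# Sketch: a Responses API request body for a deep research model with an
# MCP server (search/fetch) and Code Interpreter enabled. The server URL
# and label below are placeholders.
request_body = {
    "model": "o4-mini-deep-research",  # or "o3-deep-research"
    "input": "Summarize recent advances in solid-state battery chemistry.",
    "background": True,  # long-horizon tasks run best in background mode
    "tools": [
        # A remote MCP server the model can call search/fetch tools on.
        {
            "type": "mcp",
            "server_label": "my_docs",                # placeholder label
            "server_url": "https://example.com/mcp",  # placeholder URL
            "require_approval": "never",
        },
        # Code Interpreter for running analysis during the research task.
        {"type": "code_interpreter", "container": {"type": "auto"}},
    ],
}
```

Passing this body to the Responses endpoint kicks off the task; pairing `background: True` with webhooks (below) avoids holding a connection open for the duration.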
Instead of polling, you can now use webhooks to receive notifications for certain API events, such as completed responses, fine-tuning jobs, and batch jobs.
We recommend using webhooks for long-horizon tasks (like deep research!) to improve reliability.
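A webhook receiver should verify each delivery before trusting it. The sketch below follows the Standard Webhooks convention (HMAC-SHA256 over `id.timestamp.payload` with a base64-encoded, `whsec_`-prefixed secret); the header and field names here are assumptions from that convention, so check the docs (or use the SDK's built-in unwrap helper) for the exact shape:

```python
import base64
import hashlib
import hmac

def verify_webhook(secret: str, webhook_id: str, timestamp: str,
                   payload: bytes, signature_header: str) -> bool:
    """Verify a webhook delivery, assuming the Standard Webhooks scheme:
    HMAC-SHA256 over "id.timestamp.payload" keyed with the decoded secret."""
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{webhook_id}.{timestamp}.".encode() + payload
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    # The signature header may carry several space-separated "v1,<sig>" entries.
    for candidate in signature_header.split():
        version, _, sig = candidate.partition(",")
        # Constant-time comparison to avoid timing side channels.
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

On a verified `response.completed` event, fetch the finished response by ID instead of polling for it.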
Centrally manage, version, and optimize prompts. Use (and reuse!) prompts across the Playground, API, Evals, and Stored Completions.
🆕 The Prompt object is a new resource that can be referenced in the Responses API and our SDKs. Prompts can be preconfigured too—including tools, models, and messages. No more manual copying and pasting!
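A minimal sketch of referencing a stored Prompt from a Responses API call. The prompt ID, version, and variable names below are placeholders; because the Prompt can carry its own model, tools, and messages, the request itself stays small:

```python
# Sketch: calling the Responses API with a stored Prompt reference.
# "pmpt_abc123" and the variable names are placeholders, not real IDs.
request_body = {
    "prompt": {
        "id": "pmpt_abc123",   # placeholder Prompt ID
        "version": "2",        # optionally pin a specific version
        # Fill in any template variables defined on the Prompt.
        "variables": {"customer_name": "Ada"},
    },
}
```

Since the model and tools can live on the Prompt object, updating a prompt version in the dashboard changes behavior without touching application code.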
And in the Playground:
⚡️ We’ve improved the “Generate” button. Hit “Optimize”, and your prompt will be rewritten to work better with the API.
🔡 Replacing “presets”, you can now save Prompts, including their versions and configurations, and reuse or share them.
You can now use tools and Structured Outputs in eval runs, and evaluate tool calls based on the arguments passed and the responses returned. This works with OpenAI-hosted tools, MCP tools, and non-hosted tools. Read more in our guides below.
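To make the tool-call grading concrete, here is a hypothetical grader in the spirit of what an eval might check: did the model call the expected tool with the expected arguments? The item shape (a `name` plus a JSON-encoded `arguments` string) mirrors how tool calls are commonly returned, but treat it as an illustration rather than the eval product's exact schema:

```python
import json

def grade_tool_call(actual: dict, expected: dict) -> bool:
    """Hypothetical grader: pass only if the model called the expected
    tool and its (JSON-encoded) arguments match the expected arguments."""
    if actual.get("name") != expected["name"]:
        return False
    try:
        actual_args = json.loads(actual.get("arguments", "{}"))
    except json.JSONDecodeError:
        return False  # malformed arguments fail the check
    return actual_args == expected["arguments"]
```

Grading on parsed arguments (rather than raw strings) keeps the check insensitive to key order and whitespace in the model's JSON output.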
1. Codex is rolling out to ChatGPT Plus users today. It includes generous usage limits for a limited time, but during periods of high demand, we might set rate limits for Plus users so that Codex remains widely available.
2. Next, our most-requested feature: You can now give Codex access to the internet during task execution to install base dependencies, run tests that need external resources, upgrade or install packages needed to build new features, and more.
3. Internet access is off by default, and can be enabled when creating a new environment or by editing an existing one. You have full control over the domains and HTTP methods Codex can use during task execution. Learn more about usage and risks in the docs: platform.openai.com/docs/codex/age…
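To illustrate the kind of policy that domain and method controls imply, here is a hypothetical allowlist check. The domains, allowed methods, and function name are all made up for illustration; they are not Codex's actual implementation or defaults:

```python
from urllib.parse import urlparse

# Example policy: only package-index hosts, read-only requests.
ALLOWED_DOMAINS = {"pypi.org", "files.pythonhosted.org"}
ALLOWED_METHODS = {"GET", "HEAD"}

def is_request_allowed(method: str, url: str) -> bool:
    """Hypothetical check in the spirit of Codex's controls: a request
    passes only if both its HTTP method and its host are allowlisted."""
    host = urlparse(url).hostname or ""
    return method.upper() in ALLOWED_METHODS and host in ALLOWED_DOMAINS
```

Scoping both the domain list and the permitted methods narrows the blast radius if a task is tricked into making unexpected requests.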