These models are the same post-trained o3 and o4-mini models that power deep research in ChatGPT.
They also support MCP (search/fetch) and Code Interpreter.
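As a rough sketch of what a deep research request to the Responses API could look like (the model identifier, tool types, and field names below are assumptions for illustration, not confirmed parameters):

```python
# Sketch of a deep research request payload for the Responses API.
# Model name, tool types, and the "background" flag are assumptions.
payload = {
    "model": "o3-deep-research",  # assumed model identifier
    "input": "Summarize recent results on solid-state battery chemistry.",
    "background": True,           # assumed flag: run as a long-horizon background task
    "tools": [
        {"type": "web_search_preview"},  # assumed built-in search tool type
        {"type": "code_interpreter", "container": {"type": "auto"}},  # assumed shape
    ],
}
```

Because deep research runs can take a long time, pairing a background request like this with webhooks (below) avoids holding a connection open.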
And instead of polling, webhooks now let you receive notifications for certain API events—such as completed responses, fine-tuning jobs, and batch jobs.
We recommend using webhooks for long-horizon tasks (like deep research!) to improve reliability.
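A webhook receiver should verify the signature on each delivery before trusting it. Here's a minimal sketch assuming an HMAC-SHA256 scheme in the style of the Standard Webhooks convention; the header format, `whsec_` secret prefix, and `v1,` signature prefix are assumptions, so check the webhooks guide for the exact scheme:

```python
import base64
import hashlib
import hmac

def verify_webhook(secret: str, msg_id: str, timestamp: str,
                   body: str, signature_header: str) -> bool:
    """Verify an HMAC-SHA256 webhook signature (Standard Webhooks-style sketch)."""
    # Strip the assumed "whsec_" prefix and base64-decode the signing secret.
    key = base64.b64decode(secret.removeprefix("whsec_"))
    # The signed content is "<id>.<timestamp>.<raw body>".
    signed = f"{msg_id}.{timestamp}.{body}".encode()
    expected = base64.b64encode(hmac.new(key, signed, hashlib.sha256).digest()).decode()
    # The header may hold space-separated "v1,<signature>" entries.
    for part in signature_header.split():
        candidate = part.split(",", 1)[-1]
        if hmac.compare_digest(candidate, expected):
            return True
    return False
```

Checking the timestamp against a tolerance window (to reject replays) is a sensible addition on top of this.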
Centrally manage, version, and optimize prompts. Use (and reuse!) prompts across the Playground, API, Evals, and Stored Completions.
🆕 The Prompt object is a new resource that can be referenced in the Responses API and our SDKs. Prompts can be preconfigured too—including tools, models, and messages. No more manual copying and pasting!
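As a sketch, referencing a saved Prompt in a Responses API request body might look like this (assuming a `prompt` parameter with `id`, `version`, and `variables` fields; the ID, version, and variable values here are placeholders):

```python
# Sketch of a Responses API request body that references a saved Prompt.
# The prompt ID, version, and variables are placeholders for illustration.
request = {
    "prompt": {
        "id": "pmpt_abc123",             # placeholder Prompt ID
        "version": "2",                  # pin a specific version (optional)
        "variables": {"city": "Paris"},  # fill template variables, if the Prompt has any
    },
}
```

Pinning a version keeps production behavior stable while you iterate on newer versions in the Playground.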
And in the Playground:
⚡️ We’ve improved the “Generate” button: it’s now “Optimize”, and hitting it rewrites your prompt to work better with the API.
🔡 Replacing “presets”, you can now save Prompts—including their versions and configurations—and reuse or share them.
You can now use tools and Structured Outputs in eval runs, and evaluate tool calls based on the arguments passed and the responses returned. This supports OpenAI-hosted, MCP, and non-hosted tools. Read more in our guides below.
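For instance, grading a tool call by its arguments can be as simple as parsing the arguments and comparing them against expected values. A rough sketch (the call shape with a `name` field and a JSON-string `arguments` field is an assumption about the data you'd grade, not a confirmed schema):

```python
import json

def tool_call_matches(call: dict, expected_name: str, expected_args: dict) -> bool:
    """Grade a model tool call against an expected name and argument values.

    Assumes `call` looks like {"name": ..., "arguments": "<JSON string>"}.
    """
    if call.get("name") != expected_name:
        return False
    try:
        args = json.loads(call.get("arguments", "{}"))
    except json.JSONDecodeError:
        return False  # malformed arguments fail the check
    # Every expected key must be present with the expected value.
    return all(args.get(k) == v for k, v in expected_args.items())
```

A real eval grader would typically also score the tool's returned response, but the same compare-against-expected pattern applies.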