Codex is getting easier to automate and customize around your code.
🪝 Hooks customize the Codex loop with scripts that run at key points in a task:
• Run validators before or after work
• Scan prompts for secrets
• Log conversations to internal systems
• Create memories or customize behavior by repo or directory
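As an illustration of the "scan prompts for secrets" use case, here is a minimal sketch of such a hook. The hook contract assumed here (receive the prompt text, signal a block with a nonzero exit) is an assumption for illustration, not the documented Codex hooks interface; adapt it to the actual API.

```python
# Sketch of a secret-scanning hook. The invocation contract (prompt in,
# nonzero exit to block the task) is an assumption, not Codex's documented
# hooks interface.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),     # PEM private key
    re.compile(r"(?i)(api|secret)[_-]?key\s*[:=]\s*\S+"),  # generic key=value
]

def find_secrets(prompt: str) -> list:
    """Return the substrings of the prompt that look like credentials."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(prompt)]

if __name__ == "__main__":
    # In a real hook this text would come from Codex (e.g. via stdin).
    demo_prompt = "please rotate AKIAABCDEFGHIJKLMNOP before deploying"
    hits = find_secrets(demo_prompt)
    if hits:
        print(f"blocked: {len(hits)} possible secret(s) in prompt", file=sys.stderr)
        sys.exit(1)
```

The same shape works for the other bullets: a validator hook runs a test command and blocks on failure, a logging hook forwards the conversation payload to an internal endpoint.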
⚙️ Programmatic access tokens provide scoped credentials for Business and Enterprise teams:
• Create tokens from ChatGPT workspace settings
• Use them in CI, release workflows, and internal automations
• Set expirations or revoke access when needed
• Keep usage tied back to the workspace
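In a CI or internal automation, the token would typically be injected from the secret store and turned into an auth header. A minimal sketch, assuming a Bearer-token scheme; the environment variable name `CODEX_ACCESS_TOKEN` is hypothetical, so check your workspace documentation for the actual name:

```python
# Sketch: fail fast if the workspace-scoped token is missing, then build
# an auth header. CODEX_ACCESS_TOKEN is a hypothetical variable name.
import os

def auth_header(env_var: str = "CODEX_ACCESS_TOKEN") -> dict:
    """Build a Bearer auth header from a CI secret, failing fast if unset."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} is not set; inject it from your CI secret store")
    return {"Authorization": f"Bearer {token}"}

os.environ.setdefault("CODEX_ACCESS_TOKEN", "example-token")  # demo only
print(auth_header())
```

Failing fast when the secret is absent keeps a misconfigured pipeline from silently running unauthenticated, and keeps usage attributable to the workspace that issued the token.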
Today we’re announcing Open Responses: an open-source spec for multi-provider, interoperable LLM interfaces, built on top of the original OpenAI Responses API.
✅ Multi-provider by default
✅ Useful for real-world workflows
✅ Extensible without fragmentation
Build agentic systems without rewriting your stack for every model: openresponses.org
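To make the "without rewriting your stack" point concrete, here is a sketch of one request shape reused across providers. The field names follow the OpenAI Responses API (`model`, `input`); the provider endpoints and model names below are illustrative assumptions — see openresponses.org for the actual spec.

```python
# Sketch: one Responses-style request shape, many providers.
# Endpoints and model names are illustrative assumptions.

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-5-codex"},
    "local":  {"base_url": "http://localhost:8000/v1",  "model": "my-local-model"},
}

def build_request(provider: str, prompt: str):
    """Return (url, payload) for a Responses-style call to any provider."""
    cfg = PROVIDERS[provider]
    return f"{cfg['base_url']}/responses", {"model": cfg["model"], "input": prompt}

url, payload = build_request("openai", "Summarize this diff")
print(url, payload["model"])
```

Swapping providers changes only the configuration entry; the request-building code, and anything layered on top of it, stays the same.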
You can now get more Codex usage from your plan and credits with three updates today:
1️⃣ GPT-5-Codex-Mini — a more compact and cost-efficient version of GPT-5-Codex
2️⃣ 50% higher rate limits for ChatGPT Plus, Business, and Edu
3️⃣ Priority processing for ChatGPT Pro and Enterprise
GPT-5-Codex-Mini delivers roughly 4x more usage than GPT-5-Codex, with a slight capability tradeoff from its smaller size.
Available in the CLI and IDE extension when you sign in with ChatGPT, with API support coming soon.
Select GPT-5-Codex-Mini for easier tasks or to extend usage when you’re close to hitting rate limits.
Codex will also suggest switching to it when you reach 90% of your limits, so you can work longer without interruptions.
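If you want the mini model as your CLI default rather than switching per task, one option is the Codex CLI config file. A sketch, assuming the `model` key documented for `config.toml`; verify the key and model name against your CLI version:

```toml
# ~/.codex/config.toml — make the mini model the CLI default (assumed key name)
model = "gpt-5-codex-mini"
```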