We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about:
We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch. For example, we’re improving the model’s ability to detect and refuse certain content. We’re also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses.
As part of our iterative deployment strategy, we'll start the alpha with a small group of users to gather feedback and expand based on what we learn. We are planning for all Plus users to have access in the fall. Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen sharing capabilities we demoed separately, and will keep you posted on that timeline.
ChatGPT’s advanced Voice Mode can understand and respond with emotions and non-verbal cues, moving us closer to real-time, natural conversations with AI. Our mission is to bring these new experiences to you thoughtfully.
gpt-oss-120b matches OpenAI o4-mini on core benchmarks and exceeds it in narrow domains like competitive math or health-related questions, all while fitting on a single 80GB GPU (or high-end laptop).
gpt-oss-20b fits on devices with as little as 16GB of memory, while matching or exceeding OpenAI o3-mini.
These models are trained for agentic workflows—supporting function calling, web search, Python execution, configurable reasoning effort, and full raw chain-of-thought access. github.com/openai/gpt-oss
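To make the agentic side concrete, here's a minimal sketch of calling gpt-oss-20b with a function-calling tool. This is not an official example: it assumes the model is already being served through an OpenAI-compatible endpoint (e.g. via vLLM or Ollama), and the base URL, model identifier, and `get_weather` tool are illustrative assumptions. Reasoning effort for gpt-oss is set through the system prompt.

```python
# A minimal sketch, assuming gpt-oss-20b is served behind an
# OpenAI-compatible endpoint (e.g. vLLM or Ollama). The base_url,
# model name, and get_weather tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="not-needed-for-local",       # local servers typically ignore this
)

# Declare a function the model may call, demonstrating function calling.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-20b",  # assumed model identifier on the local server
    messages=[
        # gpt-oss reads its reasoning-effort setting from the system prompt.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "What's the weather in Berlin?"},
    ],
    tools=tools,
)

# If the model decided to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```

The same pattern works for the model's other advertised capabilities: swap the tool definition for a web-search or Python-execution tool and loop on the returned tool calls.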
ChatGPT can now do work for you using its own computer.
Introducing ChatGPT agent—a unified agentic system combining Operator’s action-taking remote browser, deep research’s web synthesis, and ChatGPT’s conversational strengths.
ChatGPT agent starts rolling out today to Pro, Plus, and Team users.
Pro users will get access by the end of day, while Plus and Team users will get access over the next few days.
ChatGPT can now connect to more internal sources & pull in real-time context—keeping existing user-level permissions.
Connectors available in deep research for Plus & Pro users (excl. EEA, CH, UK) and Team, Enterprise & Edu users:
Outlook
Teams
Google Drive
Gmail
Linear
& more
Additional connectors available in ChatGPT for Team, Enterprise, and Edu users:
SharePoint
Dropbox
Box
Workspace admins can also now build custom deep research connectors using Model Context Protocol (MCP) in beta.
MCP lets you connect proprietary systems and other apps so your team can search, reason, and act on that knowledge alongside web results and pre-built connectors.
Available to Team, Enterprise, and Edu admins, and Pro users starting today.
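To illustrate what building a custom connector can involve, here's a minimal sketch of an MCP server using the official `mcp` Python SDK's FastMCP helper. The server name and the `search_wiki` tool wrapping a proprietary search backend are hypothetical; the exact tool contract deep research expects is defined in OpenAI's connector documentation.

```python
# A minimal sketch of a custom MCP server, assuming the official `mcp`
# Python SDK. The server name, tool, and the internal lookup it wraps
# are hypothetical placeholders, not a prescribed implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-wiki")  # server name presented to the client

@mcp.tool()
def search_wiki(query: str) -> str:
    """Search the internal wiki and return matching page snippets."""
    # Placeholder: call your proprietary search backend here.
    return f"Results for: {query}"

if __name__ == "__main__":
    # ChatGPT connects to remote servers, so expose a network transport
    # (SSE is one option the SDK supports) rather than the stdio default.
    mcp.run(transport="sse")
```

Because the tool runs behind your own endpoint, existing access controls stay in your hands: the server decides what each query can see before anything reaches ChatGPT.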
We've launched several improvements to ChatGPT search, and today we're starting to roll out a better shopping experience.
Search has become one of our most popular & fastest-growing features, with over 1 billion web searches in the past week alone 🧵
Shopping
We're experimenting with making it simpler and faster to find, compare, and buy products in ChatGPT.
✅ Improved product results
✅ Visual product details, pricing, and reviews
✅ Direct links to buy
Product results are chosen independently and are not ads.
These shopping improvements are starting to roll out today to Plus, Pro, Free, and logged-out users everywhere ChatGPT is available. The rollout will take a few days to complete.
Search in WhatsApp
You can now send a WhatsApp message to 1-800-ChatGPT (+1-800-242-8478) to get up-to-date answers and live sports scores.
Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date.
For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
OpenAI o3 is a powerful model across multiple domains, setting a new standard for coding, math, science, and visual reasoning tasks.
o4-mini is a remarkably smart model for its speed and cost-efficiency, which lets it support significantly higher usage limits than o3 and makes it a strong high-volume, high-throughput option for anyone whose questions benefit from reasoning. openai.com/index/introduc…
OpenAI o3 and o4-mini are our first models to integrate uploaded images directly into their chain of thought.