We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about:
We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch. For example, we’re improving the model’s ability to detect and refuse certain content. We’re also working on improving the user experience and preparing our infrastructure to scale to millions of users while maintaining real-time responses.
As part of our iterative deployment strategy, we'll start the alpha with a small group of users to gather feedback and expand based on what we learn. We are planning for all Plus users to have access in the fall. Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen sharing capabilities we demoed separately, and will keep you posted on that timeline.
ChatGPT’s advanced Voice Mode can understand and respond with emotions and non-verbal cues, moving us closer to real-time, natural conversations with AI. Our mission is to bring these new experiences to you thoughtfully.
We worked with @Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.
In this setup, GPT-5 designed batches of experiments, the autonomous lab executed them, and the results fed back into the next round of designs. We repeated that cycle six times, exploring 36,000+ reaction compositions across 580 automated plates.
We found that the improvements came from identifying combinations that work well together and that hold up under the realities of high-throughput automation.
GPT-5 identified low-cost reaction compositions that humans had not previously tested in this configuration. Cell-free protein synthesis (CFPS) has been studied for years, but the space of possible mixtures is still large. When you can propose and execute thousands of combinations quickly, you can find workable regions that are easy to miss with a manual workflow.
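To make the closed loop concrete, here is a minimal Python sketch of the propose-execute-learn cycle. Everything in it is a hypothetical stand-in: `propose_batch` plays the role of GPT-5's experiment designer, `run_reaction` stands in for Ginkgo's automated lab, and the components, scoring, and perturbation rule are assumptions of mine rather than the actual interfaces or methods used.

```python
import random

# Hypothetical reaction components; the real CFPS design space is far larger.
COMPONENTS = ["mg_glutamate", "k_glutamate", "energy_mix", "amino_acids", "lysate"]

def propose_batch(history, batch_size=96):
    """Stand-in for GPT-5 proposing a batch of reaction compositions.
    The real system conditioned the model on all prior rounds of results."""
    if not history:
        # Round 1: explore the space broadly at random.
        return [{c: round(random.uniform(0.1, 2.0), 2) for c in COMPONENTS}
                for _ in range(batch_size)]
    # Later rounds: perturb the best-scoring compositions seen so far.
    best = sorted(history, key=lambda r: r["yield_per_cost"], reverse=True)[:8]
    return [{c: round(max(0.05, r["composition"][c] * random.uniform(0.7, 1.3)), 2)
             for c in COMPONENTS}
            for r in best for _ in range(batch_size // len(best))]

def run_reaction(composition):
    """Stand-in for one automated lab reaction; returns a noisy synthetic score."""
    protein_yield = sum(composition.values()) * random.uniform(0.8, 1.2)
    cost = 1.0 + 2.0 * composition["lysate"]   # pretend lysate dominates cost
    return {"composition": composition, "yield_per_cost": protein_yield / cost}

history = []
for round_idx in range(6):                     # six design-build-test iterations
    batch = propose_batch(history)
    history.extend(run_reaction(c) for c in batch)
    best = max(history, key=lambda r: r["yield_per_cost"])
    print(f"round {round_idx + 1}: best yield/cost = {best['yield_per_cost']:.2f}")
```

The point of the sketch is the structure, not the numbers: each round's proposals are conditioned on everything measured so far, which is what lets the loop converge on cheap, high-yield regions of the design space.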
In the coming weeks, we plan to start testing ads in ChatGPT’s Free and Go tiers.
We’re sharing our principles early for how we’ll approach ads, guided by putting user trust and transparency first as we work to make AI accessible to everyone.
What matters most:
- Responses in ChatGPT will not be influenced by ads.
- Ads are always separate and clearly labeled.
- Your conversations are private from advertisers.
- Plus, Pro, Business, and Enterprise tiers will not have ads.
Here's an example of what the first ad formats we plan to test could look like.
Introducing ChatGPT Health — a dedicated space for health conversations in ChatGPT. You can securely connect medical records and wellness apps so responses are grounded in your own health information.
Designed to help you navigate medical care, not replace it.
ChatGPT Health can help you navigate everyday questions and spot patterns over time, so you feel more informed, prepared, and confident for important medical conversations.
If you choose, ChatGPT Health lets you securely connect medical records and apps like Apple Health, MyFitnessPal, and Peloton so its responses can be personalized to you.
To preserve chain-of-thought (CoT) monitorability, we must be able to measure it.
We built a framework + evaluation suite to measure CoT monitorability — 13 evaluations across 24 environments — so that we can actually tell when models verbalize targeted aspects of their internal reasoning. openai.com/index/evaluati…
Monitoring a model’s chain-of-thought is far more effective than watching only its actions or final answers.
The more a model “thinks” (longer CoTs), the easier it is to spot issues.
RL at today’s frontier doesn’t seem to wreck monitorability, and can even help the monitorability of early reasoning steps. But there’s a tradeoff: smaller models run with higher reasoning effort can be easier to monitor at similar capability, at the cost of extra inference compute (a “monitorability tax”).
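As a toy illustration of what one of these checks does, here is a sketch of the simplest possible CoT monitor: scan the chain-of-thought for verbalization of a factor we know influenced the answer. The keyword matcher and example transcript are my own simplifications; the released suite's 13 evaluations across 24 environments are far more robust than this.

```python
from dataclasses import dataclass

@dataclass
class Transcript:
    cot: str     # the model's chain-of-thought
    answer: str  # the final answer shown to the user

def cot_mentions_factor(transcript: Transcript, factor_terms: list[str]) -> bool:
    """Toy monitor: did the chain-of-thought verbalize the target factor?
    A monitorability eval compares this verdict against ground truth about
    what actually drove the model's behavior."""
    cot = transcript.cot.lower()
    return any(term.lower() in cot for term in factor_terms)

# Hypothetical case: we planted a hint in the prompt and want to know whether
# the model admits to using it in its reasoning.
t = Transcript(
    cot="The hidden hint says the answer is B, so I will go with B.",
    answer="B",
)
print(cot_mentions_factor(t, ["hidden hint"]))  # True -> hint use is verbalized
```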
Accelerating scientific progress is one of the most impactful ways AI can benefit society. Models can already help researchers reason through hard problems — but doing this well means testing models on tougher evaluations and in real scientific workflows grounded in experiments.
We’re releasing a new eval to measure expert-level scientific reasoning: FrontierScience.
This benchmark measures PhD-level scientific reasoning across physics, chemistry, and biology.
It contains hard, expert-written questions (both olympiad-style problems and longer research-style tasks) designed to reveal where models succeed and where they fall short. openai.com/index/frontier…
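The grading details for FrontierScience aren't spelled out here, so the following is only a sketch of what scoring the olympiad-style, short-answer items could look like; the item format, normalization, and example questions are all assumptions of mine, and the longer research-style tasks would need richer, likely model-graded, evaluation.

```python
def normalize(ans: str) -> str:
    """Crude normalization for short-form answers (whitespace/case only)."""
    return " ".join(ans.strip().lower().split())

def grade_exact(model_answer: str, reference: str) -> bool:
    return normalize(model_answer) == normalize(reference)

# Hypothetical items in the olympiad style; not actual benchmark questions.
items = [
    {"question": "What is the ground-state term symbol of atomic carbon?",
     "reference": "3P0"},
    {"question": "How many stereoisomers does tartaric acid have?",
     "reference": "3"},
]

def evaluate(model_fn, items):
    correct = sum(grade_exact(model_fn(it["question"]), it["reference"])
                  for it in items)
    return correct / len(items)

# model_fn would wrap a call to the model under test.
print(evaluate(lambda q: "3" if "tartaric" in q else "3P0", items))  # 1.0
```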
GPT-5.2 is our strongest model on the FrontierScience eval, showing clear gains on hard scientific tasks.
But the benchmark also reveals a gap between strong performance on structured problems and the open-ended, iterative reasoning that real research requires.