We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about:
We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch. For example, we’re improving the model’s ability to detect and refuse certain content. We’re also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses.
As part of our iterative deployment strategy, we'll start the alpha with a small group of users to gather feedback and expand based on what we learn. We are planning for all Plus users to have access in the fall. Exact timelines depend on meeting our high safety and reliability bar. We are also working on rolling out the new video and screen sharing capabilities we demoed separately, and will keep you posted on that timeline.
ChatGPT’s advanced Voice Mode can understand and respond with emotions and non-verbal cues, moving us closer to real-time, natural conversations with AI. Our mission is to bring these new experiences to you thoughtfully.
Introducing shopping research, a new experience in ChatGPT that does the research to help you find the right products.
It’s everything you like about deep research but with an interactive interface to help you make smarter purchasing decisions.
Shopping research asks smart clarifying questions, researches deeply across the internet, reviews quality sources, and builds on ChatGPT’s understanding of you from past conversations and memory to deliver a personalized buyer’s guide in minutes.
Most neural networks today are dense and highly entangled, making it difficult to understand what each part is doing.
In our new research, we train “sparse” models—with fewer, simpler connections between neurons—to see whether their computations become easier to understand.
Unlike with normal models, we can often pull out simple, understandable parts of our sparse models that perform specific tasks, such as ending strings correctly in code or tracking variable types.
We also show promising early signs that our method could potentially scale to understand more complex behaviors.
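The thread doesn't describe how the sparse models are trained, but one common way to get "fewer, simpler connections" is to constrain each weight matrix so that only its largest-magnitude entries stay nonzero. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; the SparseLinear layer, the top-k masking rule, and the layer sizes are assumptions for illustration, not OpenAI's actual method.

```python
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer whose weight matrix keeps only its k largest-magnitude
    entries on each forward pass; all other weights are zeroed out.
    (Illustrative assumption, not the method described in the research.)"""

    def __init__(self, in_features: int, out_features: int, k: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.k = k  # number of nonzero connections to keep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Threshold = magnitude of the k-th largest weight.
        threshold = self.weight.abs().flatten().topk(self.k).values.min()
        # Mask out every weight below that threshold.
        mask = (self.weight.abs() >= threshold).float()
        return x @ (self.weight * mask).t() + self.bias

# Toy usage: a two-layer MLP where each layer keeps only 64 nonzero weights.
model = nn.Sequential(SparseLinear(128, 256, k=64), nn.ReLU(), SparseLinear(256, 10, k=64))
x = torch.randn(4, 128)
print(model(x).shape)  # torch.Size([4, 10])
```

With far fewer active connections per layer, each remaining weight carries more of the computation, which is what makes it plausible to trace small, task-specific parts of the network by hand.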
Today we’re introducing GDPval, a new evaluation that measures AI on real-world, economically valuable tasks.
Evals ground progress in evidence instead of speculation and help track how AI improves at the kind of work that matters most. openai.com/index/gdpval-v0
GDPval spans 44 occupations selected from the top 9 sectors contributing to U.S. Gross Domestic Product (GDP).
Tasks are constructed from the representative work of industry professionals with an average of 14 years of experience.