Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time:
Text and image input rolling out today in the API and ChatGPT, with voice and video in the coming weeks. openai.com/index/hello-gp…
Two GPT-4os interacting and singing
Realtime translation with GPT-4o
Lullabies and whispers with GPT-4o
Happy birthday with GPT-4o
@BeMyEyes with GPT-4o
Dad jokes with GPT-4o
Meeting AI with GPT-4o
Sarcasm with GPT-4o
Math problems with GPT-4o and @khanacademy
Point and learn Spanish with GPT-4o
Rock, Paper, Scissors with GPT-4o
Harmonizing with two GPT-4os
Interview prep with GPT-4o
Fast counting with GPT-4o
Dog meets GPT-4o
Live demo of GPT-4o realtime conversational speech
Live demo of GPT-4o voice variation
Live demo of GPT-4o vision
Live demo of coding assistance and desktop app
Live audience request for GPT-4o realtime translation
Live audience request for GPT-4o vision capabilities
All users will start to get access to GPT-4o today. In the coming weeks we’ll begin rolling out the new voice and vision capabilities we demo’d today to ChatGPT Plus.
GPT-5.2 Instant, Thinking, and Pro are rolling out today, starting with Plus, Pro, Business, and Enterprise plans. Free and Go users will get access tomorrow.
Introducing shopping research, a new experience in ChatGPT that does the research to help you find the right products.
It’s everything you like about deep research but with an interactive interface to help you make smarter purchasing decisions.
Shopping research asks smart clarifying questions, researches deeply across the internet, reviews quality sources, and builds on ChatGPT’s understanding of you from past conversations and memory to deliver a personalized buyer’s guide in minutes.
Most neural networks today are dense and highly entangled, making it difficult to understand what each part is doing.
In our new research, we train “sparse” models—with fewer, simpler connections between neurons—to see whether their computations become easier to understand.
Unlike with normal models, we can often pull out simple, understandable parts of our sparse models that perform specific tasks, such as ending strings correctly in code or tracking variable types.
We also show promising early signs that our method could potentially scale to understand more complex behaviors.
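The thread doesn’t share any code, but as a rough illustration of what “fewer, simpler connections between neurons” can mean in practice, here is a minimal sketch of a linear layer whose weight matrix is masked so only a small fraction of entries are nonzero. The magnitude-based mask, the density value, and the SparseLinear name are illustrative assumptions, not details from the research.

```python
# Illustrative sketch only: a linear layer constrained so that only a small
# fraction of its connections are nonzero. This is NOT OpenAI's training code;
# the top-magnitude mask and 5% density are assumptions chosen for clarity.
import torch
import torch.nn as nn


class SparseLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, density: float = 0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.density = density  # fraction of connections kept nonzero (assumed)

    def mask(self) -> torch.Tensor:
        # Keep only the largest-magnitude weights; everything else is zeroed,
        # so each output neuron depends on just a handful of inputs.
        k = max(1, int(self.weight.numel() * self.density))
        kth = self.weight.numel() - k + 1  # k-th largest == (n - k + 1)-th smallest
        threshold = self.weight.abs().flatten().kthvalue(kth).values
        return (self.weight.abs() >= threshold).float()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight * self.mask()).t() + self.bias


layer = SparseLinear(256, 256)
print(f"nonzero connections: {int(layer.mask().sum())} / {layer.weight.numel()}")
```

With far fewer active connections, the circuit feeding each neuron is small enough to inspect by hand, which is the interpretability benefit the thread describes.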