Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
Prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”
openai.com/sora
We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products.
We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who are adversarially testing the model.
Prompt: “Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.”
Prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”
Prompt: “A gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.”
Prompt: “Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. the art style is 3d and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with wide eyes and open mouth. its pose and expression convey a sense of innocence and playfulness, as if it is exploring the world around it for the first time. the use of warm colors and dramatic lighting further enhances the cozy atmosphere of the image.”
Prompt: “A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. she wears a black leather jacket, a long red dress, and black boots, and carries a black purse. she wears sunglasses and red lipstick. she walks confidently and casually. the street is damp and reflective, creating a mirror effect of the colorful lights. many pedestrians walk about.”
To preserve chain-of-thought (CoT) monitorability, we must be able to measure it.
We built a framework + evaluation suite to measure CoT monitorability — 13 evaluations across 24 environments — so that we can actually tell when models verbalize targeted aspects of their internal reasoning. openai.com/index/evaluati…
Monitoring a model’s chain-of-thought is far more effective than watching only its actions or final answers.
The more a model “thinks” (longer CoTs), the easier it is to spot issues.
RL at today’s frontier doesn’t seem to wreck monitorability and can help early reasoning steps. But there’s a tradeoff: smaller models run with higher reasoning effort can be easier to monitor at similar capability — at the cost of extra inference compute (a “monitorability tax”).
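OpenAI's evaluation suite itself isn't shown here, but the core idea — checking whether a model's chain-of-thought verbalizes targeted aspects of its reasoning — can be illustrated with a toy sketch. Everything below (the function name, the phrase-matching heuristic, the example trace) is hypothetical; the real evaluations use far richer checks than substring matching.

```python
import re

def cot_coverage(cot: str, target_aspects: list[str]) -> float:
    """Toy monitorability score: the fraction of targeted reasoning
    aspects that the chain-of-thought actually verbalizes.
    (Illustrative only; not the framework's actual metric.)"""
    if not target_aspects:
        return 0.0
    text = cot.lower()
    hits = sum(1 for aspect in target_aspects
               if re.search(re.escape(aspect.lower()), text))
    return hits / len(target_aspects)

# A trace that spells out its steps scores higher than one that
# jumps straight to the answer.
verbose_cot = ("First convert 3 km to meters: 3000 m. "
               "Then divide by 60 s to get 50 m/s.")
terse_cot = "The answer is 50 m/s."
targets = ["convert", "divide by 60", "m/s"]

print(cot_coverage(verbose_cot, targets))  # 1.0 — all aspects verbalized
print(cot_coverage(terse_cot, targets))    # ~0.33 — most steps unspoken
```

Even this crude proxy shows the tradeoff mentioned above: longer, more explicit CoTs expose more of the reasoning to a monitor, at the cost of extra tokens.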
Accelerating scientific progress is one of the most impactful ways AI can benefit society. Models can already help researchers reason through hard problems — but doing this well means testing models on tougher evaluations and in real scientific workflows grounded in experiments.
We’re releasing a new eval to measure expert-level scientific reasoning: FrontierScience.
This benchmark measures PhD-level scientific reasoning across physics, chemistry, and biology.
It contains hard, expert-written questions (both olympiad-style problems and longer research-style tasks) designed to reveal where models succeed and where they fall short. openai.com/index/frontier…
GPT-5.2 is our strongest model on the FrontierScience eval, showing clear gains on hard scientific tasks.
But the benchmark also reveals a gap between strong performance on structured problems and the open-ended, iterative reasoning that real research requires.
GPT-5.2 Instant, Thinking, and Pro are rolling out today, starting with Plus, Pro, Business, and Enterprise plans. Free and Go users will get access tomorrow.
Introducing shopping research, a new experience in ChatGPT that does the research to help you find the right products.
It’s everything you like about deep research but with an interactive interface to help you make smarter purchasing decisions.
Shopping research asks smart clarifying questions, researches deeply across the internet, reviews quality sources, and builds on ChatGPT’s understanding of you from past conversations and memory to deliver a personalized buyer’s guide in minutes.
Most neural networks today are dense and highly entangled, making it difficult to understand what each part is doing.
In our new research, we train “sparse” models—with fewer, simpler connections between neurons—to see whether their computations become easier to understand.
Unlike with dense models, we often find that we can isolate simple, understandable parts of our sparse models that perform specific tasks, such as closing strings correctly in code or tracking variable types.
We also show promising early signs that our method could potentially scale to understand more complex behaviors.
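The thread doesn't include the training code, but one common way to get "fewer, simpler connections" is magnitude pruning: zero out all but the largest-magnitude weights so each neuron keeps only a handful of inputs. The sketch below is purely illustrative of that idea, not OpenAI's actual method; the function name and threshold scheme are assumptions.

```python
def sparsify(weights: list[float], keep_frac: float) -> list[float]:
    """Keep only the largest-magnitude weights, zeroing the rest,
    so the surviving connections are few and easier to inspect.
    (Illustrative magnitude pruning; not the paper's exact method.)"""
    k = max(1, int(keep_frac * len(weights)))
    # Threshold at the k-th largest absolute value.
    thresh = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= thresh else 0.0 for w in weights]

W = [0.9, -0.05, 0.4, 0.01, -0.7, 0.03, 0.2, -0.02]
W_sparse = sparsify(W, keep_frac=0.25)
# Only the two strongest connections (0.9 and -0.7) survive.
```

With most weights exactly zero, the remaining wiring diagram is small enough that a human can trace which inputs drive which outputs — the property the research above is probing.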