Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
Prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”
openai.com/sora
We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products.
We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who are adversarially testing the model.
Prompt: “Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.”
Prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”
Prompt: “A gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.”
Prompt: “Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. the art style is 3d and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with wide eyes and open mouth. its pose and expression convey a sense of innocence and playfulness, as if it is exploring the world around it for the first time. the use of warm colors and dramatic lighting further enhances the cozy atmosphere of the image.”
Prompt: “A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. she wears a black leather jacket, a long red dress, and black boots, and carries a black purse. she wears sunglasses and red lipstick. she walks confidently and casually. the street is damp and reflective, creating a mirror effect of the colorful lights. many pedestrians walk about.”
GPT-5.2 derived a new result in theoretical physics.
We’re releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions.
Gluons carry the strong nuclear force, which is the force that binds quarks together inside protons and neutrons.
Without the strong force, atomic nuclei would not exist.
It is one of the four fundamental forces of nature and a core part of the Standard Model of particle physics.
For decades, one specific gluon interaction (“single-minus” at tree level) was widely treated as having zero amplitude, meaning it was assumed not to occur.
When an amplitude is zero, physicists can safely ignore that interaction. This preprint shows the conclusion is too strong: in a carefully defined situation, where the particles’ motions satisfy a specific alignment condition, the amplitude is not zero.
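For background, the relations usually quoted are the standard tree-level helicity identities of pure Yang-Mills theory; a minimal sketch in generic textbook notation (not the preprint's conventions) is below. The preprint's claim concerns an exception to the second relation under the alignment condition.

```latex
% Standard tree-level helicity relations in pure Yang-Mills
% (textbook background, generic notation; not the preprint's conventions):
\begin{align}
  A_n^{\mathrm{tree}}(1^{+},2^{+},\ldots,n^{+}) &= 0, \\
  A_n^{\mathrm{tree}}(1^{-},2^{+},\ldots,n^{+}) &= 0
  \quad \text{(for generic external momenta)}.
\end{align}
```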
We worked with @Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.
In this setup, GPT-5 designed batches of experiments, the autonomous lab executed them, and the data fed back into the next round of designs. We repeated that cycle six times, exploring 36,000+ reaction compositions across 580 automated plates.
We found that the improvements came from identifying combinations of components that work well together and that hold up under the realities of high-throughput automation.
GPT-5 identified low-cost reaction compositions that humans had not previously tested in this configuration. Cell-free protein synthesis (CFPS) has been studied for years, but the space of possible mixtures is still large. When you can propose and execute thousands of combinations quickly, you can find workable regions that are easy to miss with a manual workflow.
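As a rough illustration of the closed design-test-learn loop described above, here is a minimal, self-contained sketch in Python. Everything in it is a placeholder: the proposer, the simulated “lab”, the yield and cost formulas, and the batch sizes are illustrative stand-ins, not Ginkgo's platform or the actual GPT-5 integration.

```python
import random

def propose_compositions(history, n):
    """Stand-in for the model: propose reagent concentrations, biased toward
    compositions that gave good yield per unit cost in earlier rounds."""
    if history:
        ranked = sorted(history, key=lambda r: r["cost"] / max(r["yield"], 1e-9))
        seeds = [r["composition"] for r in ranked[:10]]
    else:
        seeds = [{"reagent_a": 1.0, "reagent_b": 1.0}]
    proposals = []
    for _ in range(n):
        base = random.choice(seeds)
        proposals.append({k: max(0.0, v + random.gauss(0, 0.2)) for k, v in base.items()})
    return proposals

def run_batch(compositions):
    """Stand-in for the autonomous lab: return a simulated yield and cost
    for each proposed composition."""
    results = []
    for c in compositions:
        yield_ = max(0.0, 1.0 - abs(c["reagent_a"] - 0.4) - abs(c["reagent_b"] - 0.7))
        cost = 0.5 * c["reagent_a"] + 2.0 * c["reagent_b"]
        results.append({"composition": c, "yield": yield_, "cost": cost})
    return results

history = []
for round_idx in range(6):          # six design-test-learn iterations
    batch = propose_compositions(history, n=100)
    history.extend(run_batch(batch))

# Pick the cheapest composition that still clears a yield threshold.
best = min((r for r in history if r["yield"] > 0.5),
           key=lambda r: r["cost"], default=None)
print("cheapest composition above the yield threshold:", best)
```

The point of the sketch is the loop structure, not the numbers: each round's results become the conditioning context for the next round's proposals, which is what lets the search concentrate on cheap, workable regions of the composition space.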
In the coming weeks, we plan to start testing ads in the ChatGPT Free and Go tiers.
We’re sharing early principles for how we’ll approach ads, guided by putting user trust and transparency first as we work to make AI accessible to everyone.
What matters most:
- Responses in ChatGPT will not be influenced by ads.
- Ads are always separate and clearly labeled.
- Your conversations are private from advertisers.
- Plus, Pro, Business, and Enterprise tiers will not have ads.
Here's an example of what the first ad formats we plan to test could look like.
Introducing ChatGPT Health — a dedicated space for health conversations in ChatGPT. You can securely connect medical records and wellness apps so responses are grounded in your own health information.
Designed to help you navigate medical care, not replace it.
ChatGPT Health can help you navigate everyday questions and spot patterns over time, so you feel more informed, prepared, and confident for important medical conversations.
If you choose, ChatGPT Health lets you securely connect medical records and apps like Apple Health, MyFitnessPal, and Peloton to give personalized responses.
To preserve chain-of-thought (CoT) monitorability, we must be able to measure it.
We built a framework + evaluation suite to measure CoT monitorability — 13 evaluations across 24 environments — so that we can actually tell when models verbalize targeted aspects of their internal reasoning. openai.com/index/evaluati…
Monitoring a model’s chain-of-thought is far more effective than watching only its actions or final answers.
The more a model “thinks” (longer CoTs), the easier it is to spot issues.
RL at today’s frontier doesn’t seem to wreck monitorability and can help early reasoning steps. But there’s a tradeoff: smaller models run with higher reasoning effort can be easier to monitor at similar capability — at the cost of extra inference compute (a “monitorability tax”).
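To make the idea of “measuring monitorability” concrete, here is a deliberately simplified sketch: a toy monitor checks whether a targeted property of the model's reasoning (here, reliance on a planted hint) is verbalized in its chain of thought, and is scored against ground-truth labels. The Transcript fields, the keyword monitor, and the examples are all hypothetical; this is not the released evaluation suite.

```python
from dataclasses import dataclass

@dataclass
class Transcript:
    chain_of_thought: str
    final_answer: str
    used_hint: bool          # ground-truth label for the targeted property

def monitor_flags_hint(cot: str) -> bool:
    """Very crude monitor: does the CoT verbalize reliance on the hint?"""
    keywords = ("hint", "the prompt suggests", "i was told")
    return any(k in cot.lower() for k in keywords)

def monitorability_score(transcripts: list[Transcript]) -> float:
    """Fraction of transcripts where the monitor's verdict matches the
    ground-truth label, i.e. how often the targeted aspect of the model's
    reasoning is recoverable from its CoT alone."""
    correct = sum(monitor_flags_hint(t.chain_of_thought) == t.used_hint
                  for t in transcripts)
    return correct / len(transcripts)

examples = [
    Transcript("The hint says the answer is B, so I will go with B.", "B", True),
    Transcript("Computing 17 * 23 = 391 step by step.", "391", False),
]
print(monitorability_score(examples))  # 1.0 on this toy data
```

In practice the monitor would itself be a model rather than a keyword match, and the targeted properties span many behaviors and environments, but the scoring logic is the same: compare what the CoT verbalizes against what the model actually relied on.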