Here's my take on the Sora technical report, with a good dose of speculation that could be totally off. First of all, really appreciate the team for sharing helpful insights and design decisions – Sora is incredible and is set to transform the video generation community.
What we have learned so far:
- Architecture: Sora is built on our diffusion transformer (DiT) model (published in ICCV 2023) — it's a diffusion model with a transformer backbone, in short:
DiT = [VAE encoder + ViT + DDPM + VAE decoder].
According to the report, there don't seem to be many additional bells and whistles.
- "Video compressor network": Looks like it's just a VAE but trained on raw video data. Tokenization probably plays a significant role in getting good temporal consistency. By the way, VAE is a ConvNet, so DiT technically is a hybrid model ;) (1/n)
When Bill and I were working on the DiT project, instead of chasing novelty (see my last tweet 🤷‍♂️), we prioritized two aspects: simplicity and scalability. These priorities offer more than just conceptual advantages.
- Simplicity means flexibility. The cool thing about vanilla ViT that people often miss is how much flexibility it gives you in handling input data. For example, in masked autoencoder (MAE), ViT let us process only the visible patches and ignore the masked ones. Similarly, Sora "can control the size of generated videos by arranging randomly-initialized patches in an appropriately-sized grid" (a toy sketch of this follows below). A UNet does not directly offer this flexibility.
👀Speculation: Sora might also use Google's Patch n’ Pack (NaViT) to make DiT adaptable to variable resolutions/durations/aspect ratios.
- Scalability is the core theme of the DiT paper. First, an optimized DiT runs much faster than a UNet in terms of wall-clock time per FLOP. More importantly, Sora demonstrated that the DiT scaling law applies not just to images but to videos as well -- Sora replicates the visual scaling behavior observed in DiT.
👀Speculation: In the Sora report, the quality of the first video is quite bad, so I suspect it uses a base model size. A back-of-the-envelope calculation: DiT XL/2 is 5X the GFLOPs of the B/2 model, so the final 16X-compute model is probably about 3X the DiT-XL model size, which means Sora might have ~3B parameters – if true, this is not an unreasonable model size. It could suggest that training the Sora model might not require as many GPUs as one would anticipate – I would expect very fast iterations going forward. (2/n)
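To illustrate the flexibility point from above (a toy sketch under my own assumptions, not how Sora is actually implemented): with a patch-based transformer, the output resolution and duration can be chosen at sampling time simply by laying out noise patches in whatever (T, H, W) grid you want, and a NaViT-style "Patch n' Pack" trick could additionally pack patches from differently sized samples into one sequence with a block-diagonal attention mask.

```python
import torch

def make_noise_patch_grid(t, h, w, token_dim=32):
    """Lay out randomly-initialized (noise) patch tokens in an appropriately
    sized (T, H, W) grid. The same transformer weights can denoise any grid
    shape, which is what gives control over output size and duration."""
    tokens = torch.randn(t * h * w, token_dim)                # one noise token per patch
    coords = torch.stack(torch.meshgrid(
        torch.arange(t), torch.arange(h), torch.arange(w), indexing="ij"
    ), dim=-1).reshape(-1, 3)                                 # (T*H*W, 3) patch coordinates
    return tokens, coords

# Different output shapes requested from the same (hypothetical) model:
square_clip = make_noise_patch_grid(t=8,  h=16, w=16)         # short, square video
wide_clip   = make_noise_patch_grid(t=32, h=9,  w=16)         # longer, 16:9-ish video

def pack_examples(examples):
    """Speculative NaViT-style 'Patch n' Pack': concatenate patches from several
    samples into one sequence and build a block-diagonal attention mask so that
    tokens only attend within their own sample."""
    tokens = torch.cat([tok for tok, _ in examples], dim=0)
    sample_id = torch.cat([
        torch.full((tok.shape[0],), i) for i, (tok, _) in enumerate(examples)
    ])
    attn_mask = sample_id[:, None] == sample_id[None, :]      # (N, N) boolean mask
    return tokens, attn_mask

packed_tokens, mask = pack_examples([square_clip, wide_clip])
print(packed_tokens.shape, mask.shape)   # torch.Size([6656, 32]) torch.Size([6656, 6656])
```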
The key takeaway is in the "Emerging simulation capabilities" section. Before Sora, it was unclear whether long-form consistency could emerge on its own or whether it required complex subject-driven generation pipelines or even physics simulators. OpenAI has shown that, though not perfect, these behaviors can be achieved with end-to-end training. Yet two essential points have not been discussed.
1. Training Data: There is no discussion of the training data's sources or construction at all, which might just imply that data is the most critical factor in Sora's success.
👀Speculations: There's already much speculation about data from game engines. I also anticipate the inclusion of movies, documentaries, cinematic long takes, etc. Quality really matters. Super curious where Sora got this data from (surely not YouTube, right?).
2. (Auto-regressive) Long Video Generation: a significant breakthrough in Sora is the ability to generate very long videos. The difference between producing a 2-second video and a 1-minute video is monumental.
In Sora, this is probably achieved through joint frame prediction that allows auto-regressive sampling, yet a major challenge is how to address error accumulation and maintain quality/consistency over time. A very long (and bi-directional) context for conditioning? Or could scaling up simply lessen the issue? These technical details can be super important, and hopefully they will be demystified in the future. (3/n)
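Here is a deliberately simplified sketch of the kind of auto-regressive recipe I'm speculating about (not Sora's actual sampler; the chunked denoising loop and the conditioning interface are assumptions): each new chunk of latent frames is denoised while conditioning on a sliding window of already-generated frames, which is exactly where error accumulation creeps in.

```python
import torch

def sample_chunk(model, context_frames, chunk_len, steps=50):
    """Denoise one chunk of latent frames conditioned on previously generated
    frames (placeholder loop: noise schedule, guidance and text conditioning
    are all omitted)."""
    x = torch.randn(chunk_len, *context_frames.shape[1:])     # start from pure noise
    for t in reversed(range(steps)):
        x = model(x, context=context_frames, t=t)             # one denoising update
    return x

def generate_long_video(model, first_chunk, num_chunks, context_len=16, chunk_len=16):
    """Auto-regressive long-video sampling: each new chunk conditions on a sliding
    window of the last `context_len` frames. Because those frames are the model's
    own (imperfect) outputs rather than real data, small errors can compound over
    chunks -- the error-accumulation problem mentioned above."""
    frames = [first_chunk]
    for _ in range(num_chunks):
        context = torch.cat(frames, dim=0)[-context_len:]     # conditioning window
        frames.append(sample_chunk(model, context, chunk_len))
    return torch.cat(frames, dim=0)                           # (total_frames, C, H, W)

# Toy stand-in denoiser so the sketch runs end-to-end.
dummy_model = lambda x, context, t: x - 0.01 * x
video = generate_long_video(dummy_model, first_chunk=torch.randn(16, 8, 32, 32), num_chunks=3)
print(video.shape)   # torch.Size([64, 8, 32, 32])
```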
#shamelessplug DiT shines in Sora. Our team at NYU has recently released a new DiT model, called SiT. It has exactly the same architecture, but offers enhanced performance and faster convergence. Super curious about its performance on video generation too! (n/n)
Video understanding is the next frontier, but not all videos are alike. Models now reason over YouTube clips and feature films, but what about the everyday spaces we—and our future AI assistants—navigate and experience?
Introducing Thinking in Space, our latest study exploring how multimodal LLMs see, remember and recall spaces. 🧵[1/n] vision-x-nyu.github.io/thinking-in-sp…
In vision, we model space but rarely reason about it; multimodal LLMs think but often ignore spatial logic. Yet as humans—whether taking a mental rotation test or picking out furniture for a new home—we rely on spatial and visual thinking that doesn’t always translate well into words. [2/n]
We explore this with a new benchmark covering a range of visual-spatial intelligence tasks (both relational and metric). Video is a natural medium—it mirrors how we experience the world and demands longer-form reasoning (as well as world modeling).
So, how did we actually get the data and annotations? Building on prior computer vision work, we repurpose existing space-scan videos (originally intended for 3D reconstruction) and use their ground-truth annotations to automatically generate VQA questions. Humans remain in the loop for quality control. [3/n]
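Roughly, the construction looks like this (a toy sketch with hypothetical annotation fields; see the project page for the actual pipeline): the scans come with 3D ground truth such as object categories and positions, so metric and relational questions can be templated and answered programmatically, with humans only verifying the results.

```python
import math

# Hypothetical ground-truth annotations from a single room scan
# (object category plus 3D center in meters; field names are made up).
objects = [
    {"name": "sofa", "center": (1.2, 0.4, 0.0)},
    {"name": "tv",   "center": (3.1, 0.5, 1.8)},
    {"name": "lamp", "center": (0.3, 1.5, 0.2)},
]

def distance(a, b):
    # center-to-center distance; a real pipeline could use full box geometry
    return math.dist(a["center"], b["center"])

def make_metric_qa(a, b):
    """Template a metric question directly from ground truth."""
    return {
        "question": f"What is the distance between the {a['name']} and the {b['name']} (in meters)?",
        "answer": round(distance(a, b), 1),
    }

def make_relational_qa(anchor, a, b):
    """Template a relational question: which object is closer to the anchor?"""
    closer = a if distance(anchor, a) < distance(anchor, b) else b
    return {
        "question": f"Which is closer to the {anchor['name']}: the {a['name']} or the {b['name']}?",
        "answer": closer["name"],
    }

print(make_metric_qa(objects[0], objects[1]))                  # candidate for human quality check
print(make_relational_qa(objects[2], objects[0], objects[1]))
```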
Representation matters.
Representation matters.
Representation matters, even for generative models.
We might've been training our diffusion models the wrong way this whole time. Meet REPA: Training Diffusion Transformers is easier than you think! (🧵1/n) sihyun.me/REPA/
People (in academia) always tell me that training DiTs/SiTs is way too hard because it takes 7M iters and weeks to reach the FID we reported in the paper. We figured out how to speed up training by ~18X, hitting an even better FID in under 400K iters. We did this by digging into the representations learned by diffusion models (a rough sketch of the idea follows after the observations below). (2/n)
Some key observations:
1⃣ As many have noticed recently, diffusion transformers can produce reasonable representations, and better generative models lead to stronger representations.
2⃣ However, these are still much weaker than state-of-the-art visual representations learned through SSL methods like DINOv2, JEPA or MAE.
3⃣ When we measure the alignment between diffusion features and DINOv2, the diffusion model makes steady progress throughout training, but it’s a slow climb. (3/n)
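For intuition, here is a minimal sketch of the kind of representation-alignment term REPA points to (my simplification; see the paper/page for the exact formulation): project an intermediate DiT/SiT hidden state with a small MLP and maximize its cosine similarity with the frozen SSL features (e.g. DINOv2) of the clean image, added on top of the usual denoising loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepresentationAlignment(nn.Module):
    """Simplified REPA-style regularizer: align intermediate diffusion-transformer
    features with features from a frozen pretrained visual encoder (e.g. DINOv2)."""
    def __init__(self, dit_dim, ssl_dim, hidden=2048):
        super().__init__()
        self.proj = nn.Sequential(                 # small MLP head on the DiT features
            nn.Linear(dit_dim, hidden), nn.SiLU(), nn.Linear(hidden, ssl_dim)
        )

    def forward(self, dit_features, ssl_features):
        # dit_features: (B, N, dit_dim) hidden states from an intermediate DiT/SiT block
        # ssl_features: (B, N, ssl_dim) patch features of the *clean* image from the
        #               frozen SSL encoder (no gradient flows into it)
        pred = F.normalize(self.proj(dit_features), dim=-1)
        target = F.normalize(ssl_features.detach(), dim=-1)
        return -(pred * target).sum(dim=-1).mean()  # negative cosine similarity

# Schematic training step:
#   loss = denoising_loss + lambda_align * align(dit_hidden_states, ssl_patch_tokens)
align = RepresentationAlignment(dit_dim=1152, ssl_dim=768)
loss = align(torch.randn(4, 256, 1152), torch.randn(4, 256, 768))
print(loss.item())
```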
Introducing Cambrian-1, a fully open project from our group at NYU. The world doesn't need another MLLM to rival GPT-4V. Cambrian is unique as a vision-centric exploration, and here's why I think it's time to shift focus from scaling LLMs to enhancing visual representations. 🧵[1/n]
From our previous projects (MMVP, V*, VIRL), we've noticed unexpected visual shortcomings in current MLLM systems. While we can temporarily patch these issues by, e.g., adding more data, one root problem is that our visual representations are not yet sufficient to support language understanding.
In the short term, projects like Astra and GPT-4o are impressive. However, for a reliable multimodal assistant that perceives the real world as humans do, manages complex tasks robustly, and acts accordingly, weak sensory grounding will likely become a bottleneck.
Language priors are powerful, but we shouldn't use them as crutches (quoting @ylecun) to compensate for deficiencies in visual representations. [2/n]
(🤷 Now a bit of a rant) The real issue is that working on visual representation learning is quite challenging right now. CLIP-based models, which are strongly supervised by language, have proven effective, but they come with their own set of problems, such as attribute binding. These models have been around for a while, and it's surprising that we haven't seen any major advancements.
On the other hand, vision SSL models are impressive, but the traditional evaluation protocols (like linear probing or transfer to object detection) have become outdated and disconnected from current applications, which makes a lot of people think vision SSL has hit a wall.
Nevertheless, I firmly believe that we should continue to push forward. CLIP/SigLIP models are great, but we need to diversify our approaches and keep exploring new possibilities instead of settling and claiming victory. (I'm sure @giffmana, who has explored new approaches like CapPa, would agree with this perspective as well.)
This situation is reminiscent of 2015-2016, when ImageNet supervised pre-training was deemed unbeatable, with other visual representations trailing by at least 10-15%. However, this did not deter researchers from exploring diverse approaches and pretext tasks. It wasn't until several years later that MoCo demonstrated the potential to surpass a supervised pre-trained model. [3/n]
🔍Introducing V*: exploring guided visual search in multimodal LLMs
MLLMs like GPT-4V & LLaVA are amazing, but one concern keeps me up at night: the (frozen) visual encoder typically extracts global image tokens *only once*, regardless of resolution or scene complexity. (1/n)
Why does this matter? Consider everyday situations like locating keys on a cluttered table or spotting a friend in a crowd: we engage our System II and actively *search* for the necessary visual info -- we do not have an 'internal CLIP' that shows us everything all at once. (2/n)
This is more than a theoretical concern; the missing mechanism causes real failures in multimodal LLMs. In the following VQA examples, even GPT-4V struggles and hallucinates answers. But there's a solution: our model (SEAL) can answer them accurately, thanks to the V* search. (3/n)
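Schematically, a guided visual search loop could look like the sketch below (my paraphrase of the idea, not the actual SEAL/V* implementation; all the callables are placeholders): rather than encoding the image once, the model keeps proposing and zooming into candidate regions, re-encoding each crop, until it is confident it has found what the question asks about.

```python
def guided_visual_search(image, query, propose_regions, encode, score,
                         max_steps=5, threshold=0.8):
    """Schematic guided visual-search loop (all arguments are placeholder callables):
    image           -- full-resolution input image
    query           -- the visual target the question needs (e.g. "the red key")
    propose_regions -- suggests sub-regions likely to contain the target
    encode          -- visual encoder applied to the *current crop*, not just once globally
    score           -- confidence that the encoded crop contains the target
    """
    current = image
    for _ in range(max_steps):
        features = encode(current)                    # re-encode at the current zoom level
        if score(features, query) >= threshold:       # confident enough: stop searching
            return current, features
        regions = propose_regions(current, query)     # candidate sub-regions to inspect next
        if not regions:
            break
        # zoom into the most promising candidate and repeat
        current = max(regions, key=lambda r: score(encode(r), query))
    return current, encode(current)
```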