Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.
However, I think the external response has been somewhat overblown, especially in narratives around cost. One implication of having two paradigms (pre-training and reasoning) is that we can optimize for a capability over two axes instead of one, which leads to lower costs.
But it also means we have two axes along which we can scale, and we intend to push compute aggressively into both!
As research in distillation matures, we're also seeing that pushing on cost and pushing on capabilities are increasingly decoupled. The ability to serve at lower cost (especially at higher latency) doesn't imply the ability to produce better capabilities.
We will continue to improve our ability to serve models at lower cost, but we remain optimistic about our research roadmap and will stay focused on executing it. We're excited to ship better models to you this quarter and throughout the year!