Here is how we are prioritizing compute over the next couple of months in light of the increased demand from GPT-5:
1. We will first make sure that current paying ChatGPT users get more total usage than they did before GPT-5.
2. We will then prioritize API demand up to the currently allocated capacity and the commitments we've made to customers. (For a rough sense, this capacity can support roughly 30% of new API growth from where we are today.)
3. We will then increase the quality of the free tier of ChatGPT.
4. We will then prioritize new API demand.
We are ~doubling our compute fleet over the next 5 months (!) so this situation should get better.
We have raised a $110 billion round of funding from Amazon, NVIDIA, and SoftBank.
We are grateful for the support from our partners, and have a lot of work to do to bring you the tools you deserve.
We are excited to partner with Amazon to bring a new generation of products to market, especially around new enterprise products like the stateful runtime environment. We are also very excited to make great use of Trainium.
We continue to have a great relationship with Microsoft. Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them.
first, GPT-5 is an integrated model, meaning no more model switcher; it decides when it needs to think harder or not.
it is very smart, intuitive, and fast.
it is available to everyone, including the free tier, w/reasoning!
evals aren't the most important thing--the most important thing is how useful we think the model will be--but it does well on evals. for example, a new high on SWE-bench and many other metrics.
it is by far our most reliable and factual model ever.
rolling out today for free, plus, pro, and team users. next week to enterprise and edu.
making this available in the free tier is a big deal to us; PhD-level intelligence for everyone!
it can go use the internet, do complex research and reasoning, and give you back a report.
it is really good, and can do tasks that would take a person hours or days and cost hundreds of dollars.
people will post lots of great examples, but here is a fun one:
i am in japan right now and looking for an old NSX. i spent hours searching unsuccessfully for the perfect one. i was about to give up and deep research just...found it.
it is very compute-intensive and slow, but it's the first ai system that can do such a wide variety of complex, valuable tasks.
going live in our pro tier now, with 100 queries per month.
plus, team, and enterprise will come soon, and then free tier.
here is o1, a series of our most capable and aligned models yet:
o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it. openai.com/index/learning…
but also, it is the beginning of a new paradigm: AI that can do general-purpose complex reasoning.
o1-preview and o1-mini are available today (ramping over some number of hours) in ChatGPT for plus and team users and our API for tier 5 users.
screenshot of eval results in the tweet above and more in the blog post, but worth especially noting:
a fine-tuned version of o1 scored at the 49th percentile in the IOI under competition conditions! and it achieved gold-medal level with 10k submissions per problem.