OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We’re hiring: https://t.co/dJGr6Lg202
Apr 16 • 5 tweets • 2 min read
Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date.
For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
OpenAI o3 is a powerful model across multiple domains, setting a new standard for coding, math, science, and visual reasoning tasks.
o4-mini is a remarkably smart model for its speed and cost-efficiency. This allows it to support significantly higher usage limits than o3, making it a strong option for high-volume, high-throughput questions that benefit from reasoning. openai.com/index/introduc…
Apr 10 • 4 tweets • 2 min read
Starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.
In addition to the saved memories you already had, it can now draw on your past chats to deliver responses that feel noticeably more relevant and useful.
New conversations naturally build upon what it already knows about you, making interactions feel smoother and uniquely tailored to you.
Feb 25 • 5 tweets • 1 min read
Deep research is now rolling out to all ChatGPT Plus, Team, Edu, and Enterprise users 🍾
Since the initial launch, we’ve made some improvements to deep research:
✅Embedded images with citations in the output
✅Better at understanding and referencing uploaded files
Feb 18 • 6 tweets • 2 min read
Today we’re launching SWE-Lancer—a new, more realistic benchmark to evaluate the coding performance of AI models. SWE-Lancer includes over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. openai.com/index/swe-lanc…
SWE-Lancer tasks span the full engineering stack, from UI/UX to systems design, and include a range of task types, from $50 bug fixes to $32,000 feature implementations. SWE-Lancer includes both independent engineering tasks and management tasks, where models choose between technical implementation proposals.
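As a rough illustration of how payout-weighted scoring works, here is a minimal sketch in Python. The task data, field names, and scoring rule below are invented for illustration; SWE-Lancer's actual harness evaluates deliverables with end-to-end tests before crediting a payout.

```python
# Hypothetical sketch of SWE-Lancer-style scoring: each task carries a
# real-world dollar payout, and a model "earns" the payout only for tasks
# whose tests pass. Task records here are invented for illustration.

tasks = [
    {"id": "bug-fix-1", "payout": 50, "passed": True},
    {"id": "feature-2", "payout": 32000, "passed": False},
    {"id": "bug-fix-3", "payout": 250, "passed": True},
]

# Total earnings = sum of payouts for passing tasks only.
earned = sum(t["payout"] for t in tasks if t["passed"])
total = sum(t["payout"] for t in tasks)
print(f"${earned} of ${total} earned")  # → $300 of $32300 earned
```

Scoring in dollars rather than raw pass rate weights hard, high-value tasks more heavily, which is the point of tying the benchmark to real freelance payouts.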
Dec 20, 2024 • 5 tweets • 2 min read
Today, we shared evals for an early version of the next model in our o-model reasoning series: OpenAI o3
On several of the most challenging frontier evals, OpenAI o3 sets new milestones for what’s possible in coding, math, and scientific reasoning.
It also makes significant progress on the ARC-AGI evaluation for the first time.
ChatGPT can now work directly with more coding and note-taking apps—through voice or text—on macOS.
Work with your code in context, with expanded support for coding apps like Warp, IntelliJ IDEA, PyCharm, and more.
Dec 16, 2024 • 4 tweets • 2 min read
🌐 ChatGPT search 🌐 is starting to roll out to all Free users today.
Search the web in a faster, better way—available globally on chatgpt.com and on our mobile and desktop apps for all logged-in users.
Search with Advanced Voice in ChatGPT, rolling out over the next week.
Dec 11, 2024 • 4 tweets • 1 min read
ChatGPT is now integrated into Apple experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT’s capabilities right within the OS.
Siri with ChatGPT
Dec 5, 2024 • 6 tweets • 2 min read
OpenAI o1 is now out of preview in ChatGPT.
What’s changed since the preview? A faster, more powerful reasoning model that’s better at coding, math & writing.
o1 now also supports image uploads, allowing it to apply reasoning to visuals for more detailed & useful responses.
OpenAI o1 is more concise in its thinking, resulting in faster response times than o1-preview.
Our testing shows that o1 outperforms o1-preview, reducing major errors on difficult real-world questions by 34%.
Sep 24, 2024 • 6 tweets • 2 min read
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
It can also say “Sorry I’m late” in over 50 languages.
If you are a Plus or Team user, you will see a notification in the app when you have access to Advanced Voice.
Sep 19, 2024 • 11 tweets • 2 min read
Some favorite posts about OpenAI o1, as selected by researchers who worked on the model 🧵
We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.
These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. openai.com/index/introduc…
Rolling out today in ChatGPT to all Plus and Team users, and in the API for developers on tier 5.
Jul 30, 2024 • 5 tweets • 1 min read
We’re starting to roll out Advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.
Users in this alpha will receive an email with instructions and a message in their mobile app. We'll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall. As previously mentioned, video and screen sharing capabilities will launch at a later date.
May 13, 2024 • 23 tweets • 6 min read
Say hello to GPT-4o, our new flagship model, which can reason across audio, vision, and text in real time:
Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks. openai.com/index/hello-gp…
Two GPT-4os interacting and singing
Apr 12, 2024 • 4 tweets • 2 min read
Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding.
Source: github.com/openai/simple-…
For example, when writing with ChatGPT, responses will be more direct, less verbose, and use more conversational language.
Feb 15, 2024 • 7 tweets • 3 min read
Introducing Sora, our text-to-video model.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
Prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.” openai.com/sora
We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products.
We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who are adversarially testing the model.
Dec 18, 2023 • 5 tweets • 2 min read
We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development. openai.com/safety/prepare…
Our Preparedness Team will drive technical work, pushing the limits of our cutting edge models to run evaluations and closely monitor risks, including during training runs. Results will be synthesized in scorecards that track model risk.
Dec 14, 2023 • 4 tweets • 2 min read
In the future, humans will need to supervise AI systems much smarter than them.
We study an analogy: small models supervising large models.
Read the Superalignment team's first paper showing progress on a new approach, weak-to-strong generalization: openai.com/research/weak-…
Large pretrained models have excellent raw capabilities—but can we elicit these fully with only weak supervision?
GPT-4 supervised by ~GPT-2 recovers performance close to GPT-3.5 supervised by humans—generalizing to solve even hard problems where the weak supervisor failed!
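The "performance gap recovered" framing from the paper can be sketched in a few lines. The metric below follows the high-level idea (what fraction of the gap between the weak supervisor and the strong model's ceiling the weak-to-strong student closes); the accuracy numbers are illustrative, not results from the paper.

```python
# Hypothetical sketch of the weak-to-strong evaluation metric.
# A "weak supervisor" model labels data, a stronger student is trained on
# those labels, and we measure how much of the weak-to-strong performance
# gap the student recovers. All numbers here are invented for illustration.

def performance_gap_recovered(weak_acc, weak_to_strong_acc, strong_ceiling_acc):
    """Fraction of the gap between the weak supervisor's accuracy and the
    strong model's ceiling (ground-truth-supervised) that the student
    trained on weak labels manages to recover."""
    return (weak_to_strong_acc - weak_acc) / (strong_ceiling_acc - weak_acc)

# Example: weak supervisor scores 60%, the strong model's ceiling is 90%,
# and the student trained only on weak labels reaches 84%.
pgr = performance_gap_recovered(0.60, 0.84, 0.90)
print(round(pgr, 2))  # → 0.8
```

A value of 1.0 would mean weak supervision elicited the strong model's full capability; 0.0 would mean the student merely imitated its weak supervisor.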
Sep 27, 2023 • 4 tweets • 1 min read
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021.
Since the original launch of browsing in May, we received useful feedback. Updates include following robots.txt and identifying user agents so sites can control how ChatGPT interacts with them.
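Because browsing honors robots.txt, a site can opt out with a standard exclusion entry. A minimal example, assuming `ChatGPT-User` is the relevant user-agent token (check OpenAI's documentation for the current name):

```
# Hypothetical robots.txt entry blocking ChatGPT browsing site-wide.
# The user-agent token is an assumption; consult OpenAI's docs.
User-agent: ChatGPT-User
Disallow: /
```

Sites that want to allow browsing need no entry at all; an empty `Disallow:` line for that user agent also permits access explicitly.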
Sep 25, 2023 • 4 tweets • 2 min read
ChatGPT can now see, hear, and speak. Rolling out over the next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms). openai.com/blog/chatgpt-c…
Use your voice to engage in a back-and-forth conversation with ChatGPT. Speak with it on the go, request a bedtime story, or settle a dinner table debate.
Sound on 🔊
Aug 31, 2022 • 9 tweets • 4 min read
DALL·E’s canvas just got bigger. Expand your creativity with Outpainting: openai.com/blog/dall-e-in…
"La Vie Lente" — OpenAI researcher Tyna Eloundou x DALL·E @ThankYourNiceAI