elvis
Sep 6, 2025 · 7 tweets · 3 min read
Everyone is talking about this new OpenAI paper.

It's about why LLMs hallucinate.

You might want to bookmark this one.

Let's break down the technical details:
Quick Overview

The paper argues that hallucinations are not mysterious glitches but the predictable result of how LLMs are trained and evaluated.

Pretraining creates statistical pressure to make errors, and post-training benchmarks often reward confident guessing over honest uncertainty.

The fix is to realign mainstream evaluations to stop penalizing abstentions.
Pretraining inevitably produces some errors

Even if you trained on flawless text, the statistics of how models learn guarantee they'll still slip up sometimes.

That's because the training objective rewards producing plausible text, not flagging uncertainty, so the model learns to give answers instead of saying "I don't know."

The paper's calibration histograms show that GPT-4-style base models are well calibrated prior to RL, consistent with this claim.
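
To make "well calibrated" concrete: a model is calibrated when its stated confidence matches its empirical accuracy. Here's a minimal sketch of the standard expected-calibration-error measure (my illustration, not code from the paper):

```python
from collections import defaultdict

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bucket predictions by confidence, then average
    |accuracy - mean confidence| per bucket, weighted by bucket size."""
    bins = defaultdict(list)
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf=1.0 into top bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    return sum(
        (len(items) / total)
        * abs(sum(o for _, o in items) / len(items)      # bucket accuracy
              - sum(c for c, _ in items) / len(items))   # bucket mean confidence
        for items in bins.values()
    )

# A calibrated model: 70%-confident answers are right 7 times out of 10.
print(expected_calibration_error([0.7] * 10, [1] * 7 + [0] * 3))  # -> 0.0
```
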
Arbitrary facts set a floor on hallucinations

Details like birthdays or one-off events show up rarely in training data. When a fact appears only once, there's no pattern to generalize from, so the model can only guess when asked about it later.

So for these "one-shot" (singleton) facts, a baseline rate of hallucination is baked in.
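
You can see the intuition with a quick count (an illustrative sketch; the paper's formal "singleton rate" definition differs in detail):

```python
from collections import Counter

def singleton_rate(facts):
    """Fraction of distinct facts seen exactly once in training.
    Good-Turing intuition: this is the mass where the model has no
    repeated evidence, so it's a floor on post-training guessing."""
    counts = Counter(facts)
    return sum(1 for c in counts.values() if c == 1) / len(counts)

# Toy "corpus" of (entity, attribute) facts:
facts = [("famous_author", "birthday"), ("famous_author", "birthday"),
         ("obscure_author", "birthday"), ("one_hit_band", "founding_year")]
print(singleton_rate(facts))  # 2 of 3 distinct facts appear only once
```
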
Weak models add to the problem

When the model family cannot represent the needed distinctions, errors persist.

The paper formalizes this with an agnostic-learning bound. It works through simple cases, like multiple choice, where even optimal thresholding leaves a fixed error tied to model capacity, and gives an example showing that classic n-gram models must fail on certain context dependencies.
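
To make the n-gram point concrete, here's a toy sketch (my example, in the spirit of the paper's): a trigram model conditions on only two previous tokens, so when the answer depends on a word outside that window, more data cannot push its error below 50%.

```python
from collections import Counter

# The correct possessive depends on the subject, which sits three
# tokens back -- outside a trigram's two-token context window.
corpus = [
    "she quickly ate her lunch".split(),
    "he quickly ate his lunch".split(),
] * 1000  # scaling up the data doesn't help

trigrams = Counter()  # estimate P(word | two previous words) by counting
for sent in corpus:
    for i in range(2, len(sent)):
        trigrams[(sent[i - 2], sent[i - 1], sent[i])] += 1

context = ("quickly", "ate")
total = sum(n for (a, b, _), n in trigrams.items() if (a, b) == context)
for (a, b, w), n in trigrams.items():
    if (a, b) == context:
        print(f"P({w!r} | 'quickly ate') = {n / total}")
# 'her' and 'his' each get 0.5: the model's capacity, not the data,
# forces an irreducible error on this dependency.
```
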
Post-training often reinforces guessing

Most benchmarks score models only on right vs. wrong answers.

Saying “I don’t know” gets you zero, while making a confident guess could get you a point.

That system rewards bluffing, so models learn to “sound sure” even when they’re not.

The authors survey widely used leaderboards and find that abstentions are largely penalized, which helps explain why overconfident hallucinations persist despite mitigation efforts.
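
The incentive is plain expected-value arithmetic (a back-of-the-envelope sketch, not from the paper): with 1 point for a correct answer and 0 for a wrong answer or an abstention, guessing always at least ties abstaining.

```python
# Binary grading: correct = 1, wrong = 0, "I don't know" = 0.
# A guess that's right with probability p scores p in expectation;
# abstaining scores a guaranteed 0. Bluffing is the optimal strategy.
for p in (0.1, 0.5, 0.9):
    e_guess = p * 1 + (1 - p) * 0
    print(f"confidence {p}: E[guess] = {e_guess:.1f} vs E[abstain] = 0.0")
```
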
The fix is to reward honesty

The authors suggest changing benchmarks so models aren’t punished for admitting uncertainty.

If benchmarks state clear rules about when to guess and when to abstain, models will learn to answer only when they're reasonably confident.

This promotes behavioral calibration: models choose between answering and abstaining according to a stated confidence target. That should steer the field toward more trustworthy systems.
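
The paper discusses explicit confidence targets to this end. A sketch of one such rule (my paraphrase): score +1 for a correct answer, 0 for abstaining, and -t/(1-t) for a wrong one, so answering only pays off when confidence exceeds the target t.

```python
def expected_answer_score(p: float, t: float) -> float:
    """Scoring rule: correct = +1, wrong = -t/(1-t), abstain = 0.
    E[answer] = p - (1 - p) * t / (1 - t), which is positive
    exactly when p > t, so a rational model abstains below t."""
    return p - (1 - p) * t / (1 - t)

t = 0.75  # "answer only if you're more than 75% confident"
for p in (0.50, 0.75, 0.90):
    s = expected_answer_score(p, t)
    print(f"confidence {p:.2f}: E[answer] = {s:+.2f} -> "
          f"{'answer' if s > 0 else 'abstain'}")
```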

Paper:
cdn.openai.com/pdf/d04913be-3…

