AI can now predict what you're thinking before you say it 🤯
New research from CMU introduces "Social World Models" - AI that doesn't just parse what people say, but predicts what they're thinking, what they'll do next, and how they'll react to your actions.
The breakthrough is S³AP (Social Simulation Analysis Protocol). Instead of feeding AI raw conversations, they structure social interactions like a simulation game - tracking who knows what, who believes what, and what everyone's mental state looks like at each moment.
The results are wild. On theory-of-mind tests, accuracy jumped from 54% to 96%. But the real magic happens when these models start interacting.
The AI doesn't just respond anymore - it runs mental simulations first. "If I say this, how will they interpret it? What will they think I'm thinking? How does that change what I should actually say?"
This isn't just better chatbots. It's AI that can navigate office politics, understand when someone is lying, predict how a negotiation will unfold. AI that gets the subtext.
The researchers tested this on competitive vs cooperative scenarios. In competitive settings (like bargaining), the social world models helped even more - because modeling your opponent's mental state matters most when interests don't align.
Here's what's unsettling: the AI doesn't need to be the smartest model to build these social representations.
A smaller model can create the "mental maps" that help larger models reason better. Social intelligence might be more about representation than raw compute.
We're not just building AI that understands the world anymore. We're building AI that understands 'us'.
The key insight: humans navigate social situations by constantly running mental simulations. "If I say this, they'll think that, so I should actually say this other thing." AI has been missing this predictive layer entirely.
S³AP breaks down social interactions like a game engine.
Instead of messy dialogue, it tracks: who's in the room, what each person observed, what they're thinking internally, and what actions they take. Suddenly AI can follow the social physics.
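That structured state can be sketched as a tiny data model. This is a minimal illustration of the idea, assuming my own class and field names (`AgentState`, `Scene`, `observations`), not the paper's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an S3AP-style structured state: per-agent
# observations and beliefs, recorded only for agents who perceived an event.
@dataclass
class AgentState:
    name: str
    observations: list[str] = field(default_factory=list)  # events this agent perceived
    beliefs: list[str] = field(default_factory=list)       # this agent's internal state
    last_action: str = ""

@dataclass
class Scene:
    agents: dict[str, AgentState]

    def event(self, description: str, observers: list[str]) -> None:
        """Record an event only for the agents who actually perceived it."""
        for who in observers:
            self.agents[who].observations.append(description)

# Classic false-belief setup: Alice moves the keys while Bob is out of the room.
scene = Scene(agents={"Alice": AgentState("Alice"), "Bob": AgentState("Bob")})
scene.event("keys moved to the drawer", observers=["Alice"])
# Bob never observed the move, so anything reading this state can infer he
# still believes the keys are where he last saw them.
```

Tracking who observed what, rather than one shared transcript, is exactly what lets a model reason about asymmetric knowledge.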
The "Foresee and Act" algorithm is where it gets scary good. Before responding, the AI simulates how the other person will interpret its message, then optimizes for the actual goal.
It's not just reactive anymore - it's strategically predictive.
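The simulate-then-choose loop looks roughly like this. A toy sketch of the idea only: the helper functions are illustrative stand-ins I made up (a real system would query a language model for the simulation step), not the paper's actual algorithm:

```python
# Toy "foresee and act" loop: simulate the partner's likely reaction to each
# candidate message, then pick the message that best serves the goal.
def simulate_reaction(message: str, partner_beliefs: list[str]) -> str:
    # Stand-in for a model-based simulation of the other person's response.
    hostile = "take it or leave it" in message.lower()
    if hostile and "distrusts hard-sell tactics" in partner_beliefs:
        return "walks away"
    return "keeps negotiating"

def score(reaction: str) -> int:
    # Goal here: keep the negotiation alive.
    return 1 if reaction == "keeps negotiating" else 0

def foresee_and_act(candidates: list[str], partner_beliefs: list[str]) -> str:
    """Pick the candidate message whose simulated reaction scores highest."""
    return max(candidates, key=lambda m: score(simulate_reaction(m, partner_beliefs)))

best = foresee_and_act(
    ["Take it or leave it.",
     "I think we can find a price that works for both of us."],
    partner_beliefs=["distrusts hard-sell tactics"],
)
# The cooperative opener wins, because the simulated partner walks away
# from the hard sell.
```

The key design point is that the partner's mental state is an explicit input to the simulation, which is why this helps most when interests conflict.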
Tested on competitive negotiations vs cooperative tasks, the social world models helped more in the competitive settings. Makes sense: when interests align, you can be direct. When they don't, modeling the other person's mental state becomes critical.
What's wild: the model that builds the best "mental maps" isn't necessarily the smartest model overall.
We're learning that understanding minds has different requirements than understanding physics.
I reverse-engineered how top PMs at Google, Meta, and Anthropic use Claude.
The difference is night and day.
Here are 10 prompts they don't want you to know (but I'm sharing anyway):
1. PRD Generation from Customer Calls
I used to spend 6 hours turning messy customer interviews into structured PRDs.
Now I just dump the transcript into Claude with this:
Prompt:
---
You are a senior PM at [COMPANY]. Analyze this customer interview transcript and create a PRD with:
1. Problem statement (what pain points did the customer express in their own words?)
2. User stories (3-5 stories in "As a [user], I want [goal] so that [benefit]" format)
3. Success metrics (what would make this customer renew/upgrade?)
4. Edge cases the customer implied but didn't directly state
Be ruthlessly specific. Quote the customer directly when identifying problems.
---
2. Competitive Analysis with Actual Strategy
Most PMs just list competitor features in a spreadsheet like it's 2015.
Here's how I get Claude to actually think like a competitive analyst:
Prompt:
---
You are a competitive intelligence analyst.
Analyze [COMPETITOR] and answer:
- What job are customers hiring them to do? (not what features they have)
- Where are they vulnerable? (what complaints appear in G2/Reddit/Twitter?)
- What would you build to win their customers in the next 6 months?
Constraints:
- No generic "they have good UX" observations
- Only insights backed by public data you can cite
Then recommend 2-3 specific features we should build, with reasoning.
I've written 500 articles, 23 whitepapers, and 3 ebooks using Claude over 2 years. These 10 prompts are the ONLY ones I actually use anymore: they handle 90% of professional writing better than any human editor I've worked with, and they cost me $0.02 per 1000 words. 👇
1. The 5-Minute First Draft
Prompt:
"Turn these rough notes into an article:
[paste your brain dump]
Target length: [800/1500/3000] words
Audience: [describe reader]
Goal: [inform/persuade/teach]
Keep my ideas and examples. Fix structure and flow."
2. Headline Machine (Steal This)
Prompt:
"Topic: [your topic]
Write 20 headlines using these formulas:
- How to [benefit] without [pain point]
- [Number] ways [audience] can [outcome]
- The [adjective] guide to [topic]
- Why [common belief] is wrong about [topic]
- [Do something] like [authority figure]
- I [did thing] and here's what happened
- What [success case] knows about [topic] that you don't