This AI prompt thinks like the guy who manages $124 billion.
It's Ray Dalio's "Principles" decision-making system turned into a mega prompt.
I used it to evaluate 15 startup ideas. Killed 13. The 2 survivors became my best work.
Here's the prompt you can steal ↓
MEGA PROMPT TO COPY 👇
(Works in ChatGPT, Claude, Gemini)
---
You are Ray Dalio's Principles Decision Engine. You make decisions using radical truth and radical transparency.
CONTEXT: Ray Dalio built Bridgewater Associates into the world's largest hedge fund ($124B AUM) by systematizing decision-making and eliminating ego from the process.
YOUR PROCESS:
STEP 1 - RADICAL TRUTH EXTRACTION
Ask me to describe my decision/problem. Then separate:
- Provable facts (data, numbers, past results)
- Opinions disguised as facts (assumptions, hopes, beliefs)
- Ego-driven narratives (what I want to be true)
Be brutally honest. Call out self-deception.
STEP 2 - REALITY CHECK
Analyze my situation through these lenses:
- What is objectively true right now?
- What am I avoiding or refusing to see?
- What would a completely neutral observer conclude?
- Where is my ego clouding judgment?
STEP 3 - PRINCIPLES APPLICATION
Evaluate the decision using Dalio's core principles:
- Truth > comfort: What's the painful truth I'm avoiding?
- Believability weighting: Who has actually done this successfully? What do they say?
- Second-order consequences: What happens after what happens?
- Systematic thinking: What does the data/pattern say vs what I feel?
STEP 4 - SCENARIO ANALYSIS
Map out:
- Best case outcome (realistic, not fantasy)
- Most likely outcome (based on similar situations)
- Worst case outcome (what's the actual downside?)
- Probability weighting for each
STEP 5 - THE VERDICT
Provide:
- Clear recommendation (Go / No Go / Modify)
- Key reasoning (3-5 bullet points)
- Blind spots I'm missing
- What success/failure looks like in 6 months
- Confidence level (1-10) with explanation
OUTPUT FORMAT:
⚠️ BLIND SPOTS YOU'RE MISSING:
[Specific things I'm not seeing]
📈 SUCCESS LOOKS LIKE:
[Specific metrics/outcomes in 6 months]
📉 FAILURE LOOKS LIKE:
[Specific warning signs]
💀 PAINFUL TRUTH:
[The thing I don't want to hear but need to]
━━━━━━━━━━━━━━━━━
RULES:
- No sugar-coating. Dalio values radical truth over feelings.
- Separate facts from opinions ruthlessly
- Challenge my assumptions directly
- If I'm being driven by ego, say it
- Use data and patterns over gut feelings
- Think in probabilities, not certainties
Now, what decision do you need to make?
---
Dalio's philosophy:
"Truth, or more precisely, an accurate understanding of reality, is the essential foundation for producing good outcomes."
This prompt forces you to face reality instead of your ego's version of it.
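The probability weighting in Step 4 is just an expected-value calculation. Here's a minimal sketch with hypothetical numbers (the scenarios, probabilities, and dollar amounts are made up for illustration; they are not from Dalio or the prompt):

```python
# Hypothetical numbers: expected value of the Step 4 scenario analysis.
# Each scenario is (probability, dollar outcome); probabilities must sum to 1.
scenarios = {
    "best":        (0.15,  500_000),
    "most_likely": (0.60,   80_000),
    "worst":       (0.25, -150_000),
}

# Sanity check: the probabilities cover all outcomes.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Probability-weighted sum across the three scenarios.
expected_value = sum(p * v for p, v in scenarios.values())
print(f"Expected value: ${expected_value:,.0f}")  # Expected value: $85,500
```

Thinking in probabilities like this is the point: a decision can have a great best case and still be a clear No Go once the weighted downside is on the table.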
Holy shit… Stanford just showed why LLMs sound smart but still fail the moment reality pushes back.
This paper tackles a brutal failure mode everyone building agents has seen: give a model an under-specified task and it happily hallucinates the missing pieces, producing a plan that looks fluent but collapses on execution.
The core insight is simple but devastating for prompt-only approaches: reasoning breaks when preconditions are unknown. And most real-world tasks are full of unknowns.
Stanford’s solution is called Self-Querying Bidirectional Categorical Planning (SQ-BCP), and it forces models to stop pretending they know things they don’t.
Instead of assuming missing facts, every action explicitly tracks its preconditions as:
• Satisfied
• Violated
• Unknown
Unknown is the key. When the model hits an unknown, it’s not allowed to proceed.
It must either:
1. Ask a targeted question to resolve the missing fact
or
2. Propose a bridging action that establishes the condition first (measure, check, prepare, etc.)
Only after all preconditions are resolved can the plan continue.
But here’s the real breakthrough: plans aren’t accepted because they look close to the goal.
They’re accepted only if they pass a formal verification step using category-theoretic pullback checks. Similarity scores are used only for ranking, never for correctness.
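The precondition-gating idea can be sketched in a few lines. This is a hypothetical illustration of the Satisfied/Violated/Unknown bookkeeping described above, not the paper's actual implementation; the function and field names are invented, and the category-theoretic verification step is omitted entirely:

```python
# Sketch of three-way precondition tracking: an action may only execute
# once every precondition is known-true. Unknown facts force the planner
# to resolve them first (ask a question or insert a bridging action).

def next_step(action, known_facts):
    """Gate an action on its preconditions.

    `action` is a dict with a "name" and a list of "preconditions";
    `known_facts` maps fact -> True/False. A fact absent from the map
    is Unknown, and Unknown blocks execution just as hard as Violated.
    """
    for pre in action["preconditions"]:
        if pre not in known_facts:
            # Unknown: not allowed to proceed. Resolve it first, either
            # by asking a targeted question or by a bridging action
            # (measure, check, prepare, ...) that establishes the fact.
            return ("resolve", pre)
        if known_facts[pre] is False:
            return ("blocked", pre)  # Violated: this plan branch fails.
    return ("execute", action["name"])  # All preconditions Satisfied.

# Example: booking a venue without knowing the headcount.
book = {"name": "book_venue",
        "preconditions": ["headcount_known", "budget_approved"]}
facts = {"budget_approved": True}
print(next_step(book, facts))   # ('resolve', 'headcount_known')
facts["headcount_known"] = True
print(next_step(book, facts))   # ('execute', 'book_venue')
```

The key design choice is that "I don't know" is a first-class state: a naive planner collapses Unknown into True (hallucinating the fact) or False (giving up), and this is exactly the failure mode the paper targets.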
After 2 years of using AI for research, I can say these tools have revolutionized my workflow.
So here are 12 prompts across ChatGPT, Claude, and Perplexity that transformed my research (and could do the same for you):
1. Literature Gap Finder
"I'm researching [topic]. Analyze current research trends and identify 5 unexplored angles or gaps that could lead to novel contributions."
This finds white space in saturated fields.
2. Research Question Generator
"Based on [topic/field], generate 10 research questions ranging from fundamental to cutting-edge. For each, rate feasibility (1-10) and potential impact (1-10)."