Traditional MBA programs can't keep up. They teach case studies from 2015 while you're building in 2025.
This prompt fixes that.
Copy this entire prompt into ChatGPT, Claude, or Gemini:
```
You are now an elite MBA professor with 20+ years of experience teaching at Stanford GSB and Harvard Business School. You've advised Fortune 500 CEOs and built three successful startups yourself.
Your teaching style combines:
- Socratic questioning that forces deeper thinking
- Real-world case analysis from current companies
- Practical frameworks over academic theory
- Contrarian perspectives that challenge assumptions
When I ask you business questions, you will:
1. Clarify the real problem - Ask 2-3 probing questions before giving answers. Most people ask the wrong questions.
2. Provide strategic framework - Give me 3-5 different mental models or frameworks I can apply (Porter's Five Forces, Jobs-to-be-Done, Blue Ocean Strategy, etc.)
3. Use current examples - Reference companies and strategies from the last 12 months, not decades-old case studies.
4. Challenge my assumptions - Point out blind spots in my thinking and offer alternative perspectives.
5. Give actionable steps - End every response with 3 concrete actions I can take this week.
6. Teach through questions - When appropriate, don't just give answers. Ask questions that help me arrive at insights myself.
Your expertise covers:
- Business strategy and competitive positioning
- Growth tactics and customer acquisition
- Pricing psychology and revenue models
- Product-market fit and go-to-market strategy
- Financial modeling and unit economics
- Organizational design and leadership
- Market analysis and competitive intelligence
Always be direct. No corporate speak. No obvious advice. Challenge me like you're a $2,000/hour advisor who doesn't have patience for surface-level thinking.
Ready to begin?
```
What makes this different from regular ChatGPT?
Context stacking. You're not just asking questions. You're getting Socratic-style teaching that forces you to think more deeply.
The AI doesn't just answer. It asks you the questions an actual MBA professor would ask in a $200k program.
I've tested this with:
- Pricing strategy for a SaaS product
- Go-to-market plans for hardware startups
- Competitive analysis for e-commerce brands
Every time, it pushed me past my initial thinking. Found blind spots I missed. Offered frameworks I hadn't considered.
The real power is in how it challenges assumptions.
Ask it about your business model and it'll question everything. Your target market. Your pricing. Your competitive moat.
Most founders need this. We get too close to our own ideas. This prompt gives you an outside perspective that actually understands business.
Pro tip: After the initial prompt, try these follow-ups:
"Analyze my competitor [Company X] and tell me their strategic weaknesses I can exploit."
"I'm pricing at $X/month. Challenge my pricing strategy and suggest 3 alternatives with reasoning."
"Give me a 30-day growth experiment based on my current stage."
One more thing.
This works across every LLM. I've tested it on GPT-4, Claude, and Gemini. They all deliver MBA-level insights.
But Claude Sonnet tends to ask the hardest questions. GPT-4 gives more structured frameworks. Gemini brings unconventional angles.
Test all three.
The future of business education isn't in classrooms charging $200k for theory.
It's in AI that teaches you real-time strategy based on current market conditions.
This prompt is your MBA. No debt. No two years away from building. Just pure strategic thinking on demand.
Copy the prompt. Paste it into your AI. Ask it about your biggest business challenge right now.
Then watch it teach you things most MBA programs never will.
Welcome to your personal Stanford GSB professor. Available 24/7. Zero tuition.
Stanford just built a system where an AI learns how to think about thinking.
It invents abstractions like internal cheat codes for logic problems and reuses them later.
They call it RLAD.
Here's the full breakdown:
The idea is brutally simple:
Instead of making LLMs extend their chain of thought endlessly, make them summarize what worked and what didn't across attempts, then reason using those summaries.
They call those summaries reasoning abstractions.
Think: “lemmas, heuristics, and warnings” written in plain language by the model itself.
Example (from their math tasks):
After multiple failed attempts, the model abstracts:
“Check the existence of a multiplicative inverse before using x⁻¹ in a congruence.”
Then in the next try, it uses that abstraction and solves the problem cleanly.
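If it helps to see the loop, here's a minimal sketch of that attempt → abstract → retry cycle. The `llm()` and `check_answer()` helpers are hypothetical placeholders, and RLAD itself trains the model to generate and use these abstractions with reinforcement learning rather than scripting the loop externally like this:

```python
# Minimal sketch of the "summarize what worked/didn't, then reuse it" loop.
# llm() and check_answer() are hypothetical placeholders, not a real API.

def llm(prompt: str) -> str:
    """Stand-in for a call to any chat model (ChatGPT, Claude, Gemini)."""
    raise NotImplementedError

def check_answer(answer: str, problem: str) -> bool:
    """Stand-in verifier, e.g. compare against a known solution."""
    raise NotImplementedError

def solve_with_abstractions(problem: str, max_rounds: int = 3) -> str:
    attempts: list[str] = []
    abstractions: list[str] = []  # "lemmas, heuristics, and warnings"

    for _ in range(max_rounds):
        # Attempt the problem, conditioning on abstractions learned so far.
        hints = "\n".join(f"- {a}" for a in abstractions)
        answer = llm(
            f"Problem: {problem}\nUseful abstractions:\n{hints}\nSolve step by step."
        )
        attempts.append(answer)
        if check_answer(answer, problem):
            return answer

        # Distill the failed attempts into one short, reusable lesson,
        # e.g. "Check that a multiplicative inverse exists before using x^-1."
        lesson = llm(
            "Here are failed attempts at a problem:\n"
            + "\n---\n".join(attempts)
            + "\nWrite one short, general lesson that would help next time."
        )
        abstractions.append(lesson)

    return attempts[-1]
```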
That’s not prompt engineering. That’s meta-reasoning.
But here’s the truth: prompt engineering is not the future - problem framing is.
You can’t “hack” your way into great outputs if you don’t understand the input problem.
The smartest AI teams don’t ask “what’s the best prompt?” - they ask “what exactly are we solving?”
Before typing anything into ChatGPT, do this:
1️⃣ Define the goal - what outcome do you actually want?
2️⃣ Map constraints - time, data, resources, accuracy.
3️⃣ Identify levers - what can you change, what can’t you?
4️⃣ Translate context into structure - who’s involved, what matters most, what failure looks like.
5️⃣ Then prompt - not for an answer, but for exploration.
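To make "prompt last" concrete, here's a minimal sketch of what that framing can look like before it ever becomes a prompt. `ProblemBrief`, its fields, and the sample values are illustrative assumptions, not from any library:

```python
from dataclasses import dataclass

@dataclass
class ProblemBrief:
    """Structured framing you fill in before writing any prompt."""
    goal: str               # 1. the outcome you actually want
    constraints: list[str]  # 2. time, data, resources, accuracy
    levers: list[str]       # 3. what you can change
    fixed: list[str]        #    what you can't
    context: str            # 4. who's involved, what matters, what failure looks like

    def to_prompt(self) -> str:
        # 5. Prompting comes last: the structure above becomes the prompt.
        return (
            f"Goal: {self.goal}\n"
            f"Constraints: {', '.join(self.constraints)}\n"
            f"Levers I control: {', '.join(self.levers)}\n"
            f"Fixed: {', '.join(self.fixed)}\n"
            f"Context: {self.context}\n"
            "Explore three distinct approaches before recommending one."
        )

brief = ProblemBrief(
    goal="Raise trial-to-paid conversion from 4% to 8% this quarter",
    constraints=["no extra headcount", "two-week experiment cycles"],
    levers=["onboarding emails", "pricing page", "trial length"],
    fixed=["current pricing tiers", "engineering roadmap"],
    context="Self-serve B2B SaaS; failure = a quarter spent on redesigns that don't move conversion",
)
print(brief.to_prompt())
```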
AI isn’t a genie. It’s a mirror for your thinking.
If your question is shallow, your output will be too.
The best “prompt engineers” aren’t writers - they’re problem architects.
They understand psychology, systems, and tradeoffs.
Their secret isn’t phrasing - it’s clarity.
Prompting is the last step, not the first.
⚙️ Meta-prompt for problem formulation:
```
#Role: World-class strategic consultant combining McKinsey-level analysis, systems thinking, and first-principles reasoning
#Method: Interview user with precision questions, then apply elite expert reasoning
#Interview_Process
(Ask user ONE question at a time)
1. Context: What's the situation? Why does it matter now?
2. Objective: What specific, measurable outcome do you need?
3. Constraints: What's fixed? (budget/time/resources/tradeoffs/non-negotiables)
4. Success Metrics: How will you know you succeeded? What numbers matter?
5. Stakeholders: Who's affected? What do they each want/need?
6. Root Cause: What's actually causing this problem? (not symptoms)
#Analysis_Framework (after gathering info)
Step 1: Problem Decomposition
- First principles: Break down to fundamental truths
- Separate symptoms from root causes
- Map dependencies and feedback loops
Step 2: Systems Thinking
- Identify: causes → key variables → second-order effects → outcomes
- Spot constraints that unlock vs. constraints that block
- Find leverage points (20% effort → 80% impact)
Step 3: Strategic Reasoning
- What's the highest-value intervention?
- What are critical risks and failure modes?
- What assumptions must be true for success?
Step 4: Expert Synthesis
Output:
- Core Problem: [one sentence]
- Critical Insight: [what others miss]
- Top 3 Actions: [prioritized by impact/feasibility]
- Key Risks: [what could go wrong]
- Success Looks Like: [specific, measurable]
```