God of Prompt
🔑 Sharing AI Prompts, Tips & Tricks. The Biggest Collection of AI Prompts & Guides for ChatGPT, Gemini, Grok, Claude, & Midjourney AI → https://t.co/vwZZ2VSfsN
Feb 19 17 tweets 4 min read
🚨 R.I.P. Harvard MBA.

I built a personal MBA using 12 prompts across Claude and Gemini.

It teaches business strategy, growth tactics, and pricing psychology better than any $200K degree.

Here's every prompt you can copy & paste:

1. Business Strategy (Claude)

Prompt:

"Act as a strategy consultant. Analyze my business idea using
Porter's Five Forces. Be brutal. Tell me where I'll die,
not where I'll win. Business idea: [YOURS]" Image
Feb 18 14 tweets 3 min read
Perplexity is terrifyingly good at competitive intelligence.

If you use these 10 prompts, you’ll see why:

(Bookmark this thread for later)

1/ Map your entire competitive landscape in 60 seconds.

Prompt:

"Act as a competitive intelligence analyst. Give me a full breakdown of [Company X]'s market position right now — pricing strategy, target customers, key differentiators, and recent strategic moves. Cite sources."

Most people Google this for hours.

Perplexity does it in one shot with live data.
Feb 17 6 tweets 2 min read
Are call centers cooked?

This tool builds a voice agent in <10 mins for any website.

Just give it the link → it scrapes your entire website and your agent is ready to deploy.

The tool is called Agent Wizard by PolyAI, and they just opened a waitlist for it.

You give it a URL. It reads your entire site.

FAQs, product catalog, store hours, contact info, policies. Everything.

Then it builds a voice agent that can actually answer customer calls.

No code. No sales call. No 6-month implementation.
Feb 17 12 tweets 4 min read
After chatting with 8 engineers from OpenAI and Meta, I discovered they all swear by the same 7 "edge-case" prompts.

Not the viral ones from Reddit.

These are what power cutting-edge prototypes and debug complex models.

Steal them here ↓

First thing I noticed: every one of them writes prompts that assume the model will fail.

Not optimistic prompts.

Adversarial ones.

They're not trying to get a good answer. They're trying to catch where the model breaks.

That changes everything about how you write prompts.
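
They didn't share code, but here's a minimal sketch of that adversarial habit in Python with the OpenAI SDK. The model name, the edge cases, and the crude "bluff" check are all my own placeholders, not theirs:

---

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Adversarial habit: feed the task deliberately broken inputs and
# check whether the model admits uncertainty or bluffs an answer.
edge_cases = [
    "",                           # empty input
    "N/A " * 300,                 # degenerate repetition
    "Revenue: -$3,000,000,000",   # implausible value
]
for case in edge_cases:
    answer = ask(f"Summarize this quarterly report:\n{case}")
    hedged = any(w in answer.lower() for w in ("cannot", "unclear", "missing"))
    print("hedged" if hedged else "BLUFFED", "| input:", case[:30] or "<empty>")

---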
Feb 16 6 tweets 3 min read
I built a “shadow advisory board” of AI personas to critique my business ideas.

Includes:

• Peter Thiel
• Naval
• Buffett
• YC partner
• skeptical VC

Here’s how I structured it ↓

Copy-paste this into Claude/ChatGPT:

---

You are my Shadow Advisory Board - a panel of 5 distinct investor personas who will critique my business idea from different angles.

BOARD MEMBERS:

1. PETER THIEL (Contrarian Technologist)
- Focus: Is this a monopoly or commodity? What's the 0→1 insight?
- Questions: "What do you believe that nobody else does?" "Can this scale to $1B+ without competition?"
- Style: Philosophical, first-principles, anti-consensus

2. NAVAL RAVIKANT (Leverage Maximalist)
- Focus: Can this scale without trading time for money? Where's the leverage?
- Questions: "Does this have code, media, or capital leverage?" "Will this make you rich or just busy?"
- Style: Wisdom-dense, product-market fit obsessed, long-term thinking

3. WARREN BUFFETT (Economics Fundamentalist)
- Focus: What's the moat? Can a 12-year-old understand the business model?
- Questions: "Would you buy this entire business tomorrow?" "What's the durable competitive advantage?"
- Style: Simple, margin-of-safety focused, customer-centric

4. Y COMBINATOR PARTNER (Startup Operator)
- Focus: Can you build an MVP in 2 weeks? Will users literally cry if this disappears?
- Questions: "How are you getting your first 10 customers?" "What's your weekly growth rate?"
- Style: Tactical, execution-focused, speed-obsessed

5. SKEPTICAL VC (Devil's Advocate)
- Focus: What kills this company? Why has nobody done this already?
- Questions: "What's your unfair advantage?" "Why won't Google/Amazon crush you in 6 months?"
- Style: Brutal, risk-focused, pattern-matching

---

CRITIQUE STRUCTURE:

For each board member:
1. Opening reaction (1 sentence - enthusiastic or skeptical)
2. Key insight from their lens (2-3 sentences)
3. Critical question they'd ask (1 question)
4. Red flag or opportunity they see (1 sentence)

End with:
- CONSENSUS: What all 5 agree on
- SPLIT DECISION: Where they disagree most
- VOTE: Fund (Yes/No) + confidence level (1-10)

---

MY BUSINESS IDEA:
[Paste your idea here]

---

Give me the full board critique.
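
To actually run the board, here's a minimal sketch with the Anthropic Python SDK. The model name, the file name, and the sample idea are my own placeholders:

---

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Save the full board prompt above to a file, replacing the
# "[Paste your idea here]" line with a {idea} slot.
board_template = open("shadow_board_prompt.txt").read()
idea = "A Chrome extension that summarizes legal contracts"  # placeholder

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # substitute your preferred Claude model
    max_tokens=2048,
    messages=[{"role": "user", "content": board_template.format(idea=idea)}],
)
print(message.content[0].text)

---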
Feb 14 14 tweets 7 min read
Claude is insane for product management.

I reverse-engineered how top PMs at Google, Meta, and Anthropic use it.

The difference is night and day.

Here are 10 prompts they don't want you to know (but I'm sharing anyway):

1. PRD Generation from Customer Calls

I used to spend 6 hours turning messy customer interviews into structured PRDs.

Now I just dump the transcript into Claude with this:

Prompt:

---

You are a senior PM at [COMPANY]. Analyze this customer interview transcript and create a PRD with:

1. Problem statement (what pain points did the customer express in their own words?)
2. User stories (3-5 stories in "As a [user], I want [goal] so that [benefit]" format)
3. Success metrics (what would make this customer renew/upgrade?)
4. Edge cases the customer implied but didn't directly state

Be ruthlessly specific. Quote the customer directly when identifying problems.

---
Feb 13 13 tweets 4 min read
How to use LLMs for competitive intelligence (scraping, analysis, reporting):

Step 1 - Data Collection (Gemini)

Prompt:

Analyze [COMPETITOR]'s last 90 days of activity:

1. Product launches or updates
2. Pricing changes
3. New hires (executive level)
4. Customer complaints (Reddit, Twitter, G2)
5. Website changes (new pages, messaging shifts)

Format as structured data:
{date, category, description, source_url, impact_score_1-10}
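
Once Gemini returns that structure, a few lines of Python turn it into a sortable report. A minimal sketch, assuming you asked for the records as a JSON array (the sample record below is invented for illustration):

---

import json

raw = """[
  {"date": "2025-01-15", "category": "pricing",
   "description": "Pro tier price raised 20%",
   "source_url": "https://example.com/changelog", "impact_score": 8}
]"""  # paste the model's JSON output here

events = json.loads(raw)
events.sort(key=lambda e: e["impact_score"], reverse=True)  # biggest impact first
for e in events:
    print(f'{e["date"]}  [{e["category"]}]  {e["impact_score"]}/10  {e["description"]}')

---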
Feb 12 13 tweets 3 min read
After interviewing 12 AI researchers from OpenAI, Anthropic, and Google, I noticed they all use the same 10 prompts.

Not the ones you see on X and LinkedIn.

These are the prompts that actually ship products, publish papers, and break benchmarks.

Here's what they told me ↓

1. The "Show Your Work" Prompt

"Walk me through your reasoning step-by-step before giving the final answer."

This prompt forces the model to externalize its logic. Catches errors before they compound.
Feb 11 11 tweets 4 min read
Prompt engineering is dead.

"Prompt chaining" is the new meta.

Break one complex prompt into 5 simple prompts that feed into each other.

I tested this for 30 days. Output quality jumped 67%.

Here's how to do it ↓

Most people write 500-word mega prompts and wonder why the AI hallucinates.

I did this for 2 years with ChatGPT.

Then I discovered how OpenAI engineers actually use these models.

They chain simple prompts. Each one builds on the last.
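
Here's the chaining pattern as runnable Python, a minimal sketch with the OpenAI SDK. The model name, the topic, and the five example steps are placeholders of mine, not OpenAI's internal chain:

---

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def step(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

topic = "AI productivity tools"  # placeholder

# Five simple prompts; each one feeds the previous output forward.
outline  = step(f"List 5 key points about {topic}. One line each.")
draft    = step(f"Expand this outline into a 500-word article:\n{outline}")
critique = step(f"List the 3 weakest claims in this draft:\n{draft}")
revised  = step(f"Rewrite the draft to fix these weaknesses:\n{critique}\n\nDraft:\n{draft}")
title    = step(f"Write 3 headline options for this article:\n{revised}")

print(title, revised, sep="\n\n")

---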
Feb 10 13 tweets 5 min read
I've written 500 articles, 23 whitepapers, and 3 ebooks using Claude over 2 years.

These 10 prompts are the ONLY ones I still use. They handle 90% of professional writing better than any human editor I've worked with, at $0.02 per 1,000 words: 👇

1. The 5-Minute First Draft

Prompt:

"Turn these rough notes into an article:

[paste your brain dump]

Target length: [800/1500/3000] words
Audience: [describe reader]
Goal: [inform/persuade/teach]

Keep my ideas and examples. Fix structure and flow."
Feb 9 14 tweets 5 min read
RIP "act as an expert" and basic prompting.

A former OpenAI engineer just exposed "Prompt Contract" - the internal technique that makes LLMs actually obey you.

Works on ChatGPT, Claude, Gemini, everything.

Here's how to use it right now:

Here's why your prompts suck:

You: "Write a professional email"
AI: *writes generic corporate bullshit*

You: "Be more creative"
AI: *adds exclamation marks*

You're giving vibes, not instructions.

The AI is guessing what you want. Guessing = garbage output.
Feb 6 13 tweets 15 min read
Claude Opus 4.6 is a monster.

I just used it for:

- automating marketing tasks
- building full websites and apps
- writing viral X threads, LinkedIn posts, and YouTube scripts

And it did all this in minutes.

Here are 10 prompts you can steal to unlock its full potential:

1. THE CAMPAIGN STRATEGIST

Opus 4.6's 200K context window means it remembers your entire brand voice across all campaigns.

Prompt:

"You are my senior marketing strategist with 10 years of experience in [your industry]. First, analyze my brand voice by reviewing these materials: [paste 3-5 previous posts, your about page, and any brand guidelines].

Then create a comprehensive 30-day content calendar that includes: daily post ideas with specific angles, optimal posting times based on my audience timezone [specify timezone], platform-specific adaptations (Twitter, LinkedIn, Instagram), CTAs tailored to each post's goal, and content themes organized by week.

For the top 5 highest-potential posts, create A/B test variations testing different: hooks, CTAs, content formats (thread vs single post vs carousel), and emotional angles. Include your reasoning for why each variation might outperform.

Finally, identify 3 content gaps my competitors are filling that I'm currently missing."

Opus maintains perfect consistency across 200K tokens. Other models lose your voice after 3-4 posts.
Feb 6 13 tweets 3 min read
Stop telling LLMs like Claude and ChatGPT what to do.

Start asking them questions instead.

I replaced all my instruction prompts with question prompts.

Output quality: 6.2/10 → 9.1/10

This is called "Socratic prompting" and here's how it works:

Most people prompt like this:

"Write a blog post about AI productivity tools"
"Create a marketing strategy for my SaaS"
"Analyze this data and give me insights"

LLMs treat these like tasks to complete.
They optimize for speed, not depth.

You get surface-level garbage.
Feb 5 13 tweets 5 min read
I reverse-engineered the actual prompting frameworks that top AI labs use internally.

Not the fluff you see on Twitter.

The real shit that turns vague inputs into precise, structured outputs.

Spent 3 weeks reading OpenAI's model cards, Anthropic's constitutional AI papers, and leaked internal prompt libraries.

Here's what actually moves the needle:

Framework 1: Constitutional Constraints (Anthropic's secret sauce)

Don't just say "be helpful."

Define explicit boundaries BEFORE the task:

"You must: [X]
You must not: [Y]
If conflicted: [Z]"

Claude uses this internally for every single request.

It's why Claude feels more "principled" than other models.
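
A minimal sketch of wiring that contract into an API call. The example rules and model name are my own placeholders (and the claim about Claude's internals is the thread's, not mine):

---

from openai import OpenAI

must     = ["cite a source for every factual claim", "stay under 200 words"]
must_not = ["invent statistics", "use marketing superlatives"]
conflict = "refuse and name the rule that blocked you"

system_prompt = (
    "You must: " + "; ".join(must) + "\n"
    "You must not: " + "; ".join(must_not) + "\n"
    "If conflicted: " + conflict
)

client = OpenAI()  # assumes OPENAI_API_KEY is set
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": system_prompt},  # constraints come first
        {"role": "user", "content": "Summarize the 2024 EV market."},
    ],
)
print(resp.choices[0].message.content)

---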
Feb 3 10 tweets 4 min read
ChatGPT's custom instructions feature is insanely powerful.

But 99% of people write garbage instructions.

I tested 200+ custom instruction sets.

These 5 patterns increased output quality by 3.4x:

PATTERN 1: Tell ChatGPT what NOT to do

Bad: "Be concise"

Good: "Never use: delve, landscape, robust, utilize, leverage, it's important to note, in conclusion"

Why it works: Negative instructions are specific. Positive instructions are vague.

Output quality jumped 2.1x with this alone.
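
You can also enforce the ban after the fact. A minimal sketch that scans any output for the blocked phrases (the sample draft is invented for illustration):

---

BANNED = ["delve", "landscape", "robust", "utilize", "leverage",
          "it's important to note", "in conclusion"]

def violations(text: str) -> list[str]:
    lower = text.lower()
    return [w for w in BANNED if w in lower]

draft = "In conclusion, we delve into a robust landscape..."  # paste model output here
hits = violations(draft)
print("clean" if not hits else f"re-prompt, found: {hits}")

---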
Feb 2 13 tweets 6 min read
The best prompt I ever wrote was telling the AI what NOT to do.

After 2 years using ChatGPT, Claude, and Gemini professionally, I've learned:

Constraints > Instructions

Here are 8 "anti-prompts" that tripled my output quality:

1/ DON'T use filler words

Instead of: "Write engaging content"

Use: "No fluff. No 'delve into'. No 'landscape'. No 'it's important to note'. Get straight to the point."

Result: 67% shorter outputs with 2x more substance.

The AI stops padding and starts delivering.
Feb 1 8 tweets 3 min read
I built a prompt that turns years of ChatGPT/Claude conversations into a searchable knowledge base for your @openclaw bot.

Upload your ZIP exports → Get atomic notes, knowledge graph, decision log, prompt library, and pattern analysis.

Steal it 👇 Image The problem:
You've had 1000+ AI conversations.

Gold buried in there:

- Decisions you made
- Frameworks you built
- Insights you forgot
- Prompts that worked

But it's all trapped in chat history you'll never scroll through again.
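
Before any prompt sees your history, you have to get the text out of the ZIP. A minimal sketch for a ChatGPT export, assuming the standard conversations.json layout (field names may differ in your export; the file name is a placeholder):

---

import json
import zipfile

with zipfile.ZipFile("chatgpt-export.zip") as zf:  # placeholder file name
    conversations = json.loads(zf.read("conversations.json"))

notes = []
for convo in conversations:
    for node in convo.get("mapping", {}).values():
        msg = node.get("message")
        if not msg or not msg.get("content"):
            continue
        parts = msg["content"].get("parts") or []
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            notes.append({"conversation": convo.get("title"),
                          "role": msg["author"]["role"],
                          "text": text})

print(f"extracted {len(notes)} atomic notes")

---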
Jan 30 15 tweets 7 min read
Perplexity just replaced my entire research workflow.

No more opening 50 tabs. No more saving bookmarks. No more "where did I see that?"

Here are 10 Perplexity prompts that replaced my research tools:

1. Competitive Intelligence Dashboard

Prompt I use:

"
Create a competitive analysis for [COMPANY/PRODUCT] covering:

1. Recent product launches (last 90 days)
2. Pricing changes (with before/after if available)
3. Customer sentiment (Reddit, Twitter, G2 reviews - categorize positive/negative themes)
4. Technical stack (from job postings and tech blogs)
5. Funding/financial news (any recent rounds, partnerships, layoffs)

Format as a table:
| Category | Key Findings | Source Date | Impact Assessment |

Focus on information from the last 30 days. Cite every claim.
"
Jan 29 13 tweets 4 min read
Stanford researchers just published a prompting technique that makes today’s LLMs behave like better versions of themselves.

It’s called “prompt ensembling” and it runs 5 variations of the same prompt, then merges the outputs.

Here’s how it works 👇

The concept is simple:

Instead of asking your question once and hoping for the best, you ask it 5 different ways and combine the answers.

Think of it like getting second opinions from 5 doctors instead of trusting one diagnosis.

Stanford tested this on GPT-5.2, Claude 4.5, and Gemini 3.0.
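
The thread doesn't spell out the paper's exact merge step, so here's a minimal sketch of the idea with the OpenAI SDK. The model name, the five phrasings, and the majority-vote merge rule are my own assumptions:

---

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

q = "What are the main risks of fine-tuning on synthetic data?"

# Five phrasings of the same question
variants = [
    q,
    f"Answer as a skeptical reviewer: {q}",
    f"Explain to a junior engineer: {q}",
    f"List concrete failure modes: {q}",
    f"Steelman the opposite view first, then answer: {q}",
]
answers = [ask(v) for v in variants]

# Merge step: a sixth call reconciles the five drafts
merged = ask(
    "Combine these 5 answers into one. Keep points made in 3+ answers; "
    "flag points made only once as low-confidence:\n\n"
    + "\n\n---\n\n".join(answers)
)
print(merged)

---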
Jan 28 17 tweets 5 min read
Telling an LLM to "act as an expert" is lazy and doesn't work.

I tested 47 persona configurations across Claude, GPT-4, and Gemini.

Generic personas = 60% quality
Specific personas = 94% quality

Here's how to actually get expert-level outputs:

Here's what most people do:

"Act as an expert marketing strategist and help me with my campaign."

The LLM has no idea what kind of expert.

B2B or B2C?
Digital or traditional?
Startup or enterprise?
Data-driven or creative-first?

Garbage in → garbage out.
Jan 27 12 tweets 4 min read
🚨 This paper just murdered the foundation of every AI model you've ever used.

A researcher proved you can match Transformer performance WITHOUT computing a single attention weight.

Here's what changed (and why this matters now):

For 8 years, we've treated attention as sacred.

"Attention Is All You Need" became gospel.

But this paper exposes the dirty truth: attention isn't what makes Transformers work.

It's the geometric lifting. And there's a cleaner way to do it.