God of Prompt
🔑 Sharing AI Prompts, Tips & Tricks. The Biggest Collection of AI Prompts & Guides for ChatGPT, Gemini, Grok, Claude, & Midjourney AI → https://t.co/vwZZ2VSfsN
34 subscribers
Jan 7 19 tweets 5 min read
I collected every NotebookLM prompt that went viral on Reddit, X, and research communities.

These turned a "cool AI toy" into a research weapon that does 10 hours of work in 20 seconds.

16 copy-paste prompts. Zero fluff.

Steal them all 👇

1/ THE "5 ESSENTIAL QUESTIONS" PROMPT

Reddit called this a "game changer." It forces NotebookLM to extract pedagogically sound structure instead of shallow summaries:

"Analyze all inputs and generate 5 essential questions that, when answered, capture the main points and core meaning of all inputs."
Jan 7 14 tweets 5 min read
🚨 New research just exposed the AI agent paradox.

Increasing agent autonomy by 30% increases failure rates by 240%.

Adding human verification loops? Failure drops 78%.

The math is brutal: autonomy costs more than oversight.

Here's everything you need to know:

The hype cycle sold us a fantasy.

Deploy AI agents. Watch them automate everything. Sit back while they handle sales, support, research, and coding.

Zero intervention. Pure autonomy. The AI employee dream.

Then production hit. And the dream became a nightmare.
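The thread never shows what a "human verification loop" actually looks like, but the pattern is small: the agent proposes, a person approves, and only approved actions run. A minimal sketch below; every function name is an illustrative placeholder, not part of any real agent framework.

```python
# Minimal human-in-the-loop gate: the agent proposes an action, a human
# approves it, and only approved actions execute. All names are placeholders.

def propose_action(task: str) -> dict:
    # Stand-in for an agent/LLM call that returns a proposed tool invocation.
    return {"tool": "send_refund", "args": {"task": task, "amount": "$49"}}

def execute(action: dict) -> None:
    # Stand-in for the irreversible side effect (API call, DB write, email).
    print(f"Executing {action['tool']} with {action['args']}")

def run_with_oversight(task: str) -> None:
    action = propose_action(task)
    print(f"Agent proposes: {action}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(action)                           # human said yes: action runs
    else:
        print("Rejected. Nothing was executed.")  # failure contained here

if __name__ == "__main__":
    run_with_oversight("Customer asked for a refund on a duplicate charge")
```

The key design choice is where the gate sits: before the side effect, not in a post-mortem after it.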
Jan 6 12 tweets 4 min read
🚨 OpenAI researchers discovered that neural networks can train for thousands of epochs without learning anything.

Then suddenly, in a single epoch, they generalize perfectly.

This phenomenon is called "Grokking".

It went from a weird training glitch to a core theory of how models actually learn.

Here’s what changed (and why this matters now):

Grokking was discovered by accident in 2022.

Researchers at OpenAI trained models on simple math tasks (modular addition, permutation groups). Standard training: Model overfits fast, generalizes poorly.

But when they kept training past "convergence" for 10,000+ epochs, the models suddenly achieved perfect generalization.

Nobody expected this.
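The setup is small enough to poke at yourself. Here's a rough sketch under some assumptions: a tiny embedding-plus-MLP on addition mod 97, heavy weight decay, trained far past the point where training accuracy saturates. Hyperparameters are illustrative, not the paper's, and whether you see the delayed jump in test accuracy depends on them.

```python
# Sketch of the grokking setup: modular addition, a small network, strong
# weight decay, trained long after training accuracy hits 100%.
import torch
import torch.nn as nn

p = 97                                        # predict (a + b) mod p
pairs = torch.tensor([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(
    nn.Embedding(p, 128), nn.Flatten(),       # embed both operands -> 256 dims
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, p),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def acc(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for epoch in range(20_000):                   # keep going long past "convergence"
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if epoch % 1000 == 0:
        print(epoch, f"train={acc(train_idx):.2f}", f"test={acc(test_idx):.2f}")
```

Train accuracy saturates early; the interesting question is whether (and when) the printed test accuracy follows.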
Jan 5 13 tweets 4 min read
R.I.P. few-shot prompting.

Meta AI researchers discovered a technique that makes LLMs 94% more accurate without any examples.

It's called "Chain-of-Verification" (CoVe) and it completely destroys everything we thought we knew about prompting.

Here's the breakthrough (and why this changes everything): 👇

Here's the problem with current prompting:

LLMs hallucinate. They generate confident answers that are completely wrong.

Few-shot examples help, but they're limited by:

- Your choice of examples
- Token budget constraints
- The model's persistent tendency to hallucinate

We've been treating symptoms, not the disease.
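CoVe isn't a single prompt, it's a pipeline: draft an answer, plan verification questions, answer them independently of the draft, then revise. Here's a rough sketch of that loop using the OpenAI Python client; the model name and the prompt wording are my placeholders, not the paper's.

```python
# Chain-of-Verification sketch: draft -> plan checks -> answer checks
# independently -> revise. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def chain_of_verification(question: str) -> str:
    draft = ask(question)                                    # 1. baseline answer
    plan = ask(                                              # 2. plan verification
        f"Question: {question}\nDraft answer: {draft}\n"
        "List 3-5 short fact-checking questions that would verify this draft. "
        "One per line, no numbering."
    )
    checks = [f"Q: {q}\nA: {ask(q)}"                         # 3. answer each check
              for q in plan.splitlines() if q.strip()]       #    without the draft
    return ask(                                              # 4. revise the draft
        f"Original question: {question}\nDraft answer: {draft}\n"
        "Verification Q&A:\n" + "\n".join(checks) +
        "\nRewrite the answer, correcting anything the checks contradict."
    )

print(chain_of_verification("Name three politicians who were born in New York City."))
```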
Jan 3 14 tweets 5 min read
🚨 A 1991 technique lets you build trillion-parameter models while only activating billions.

Nobody scaled it for decades.

Now Mixture of Experts (MoE) is the secret behind the fastest, cheapest open-source giants, and it's about to make dense LLMs look outdated.

Here's how 30-year-old math became the future of AI:

The core idea is brilliantly simple:

Instead of one giant model doing everything, you train hundreds of specialized "expert" models.

A router network decides which experts to activate for each input.

Most experts stay dormant. Only 2-8 activate per token.

Result: Trillion-parameter capacity at billion-parameter cost.
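The routing idea fits in a few lines of code. A toy sketch below; the sizes, k, and routing details are illustrative, and real MoE layers add load balancing, capacity limits, and fused kernels on top of this.

```python
# Toy top-k Mixture-of-Experts layer: a router scores experts per token and
# only the top-k experts actually run. Sizes and k are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)      # scores every expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):                            # x: (tokens, dim)
        scores = self.router(x)                      # (tokens, n_experts)
        top_w, top_i = scores.topk(self.k, dim=-1)   # keep only the top-k experts
        top_w = F.softmax(top_w, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                   # only k experts run per token
            for e in top_i[:, slot].unique().tolist():
                hit = top_i[:, slot] == e
                out[hit] += top_w[hit, slot, None] * self.experts[e](x[hit])
        return out

tokens = torch.randn(16, 64)
print(TopKMoE()(tokens).shape)                       # torch.Size([16, 64])
```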
Jan 2 15 tweets 6 min read
🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Years later, hardly anyone implements it.

"The Lottery Ticket Hypothesis" just went from academic curiosity to production necessity, and it's about to cut your inference costs by 10x.

Here's what changed (and why this matters now):

The original 2018 paper was mind-blowing:

Train a massive neural network. Delete 90% of it based on weight magnitudes. Retrain from scratch with the same initialization.

Result: The pruned network matches the original's accuracy.

But there was a catch that killed adoption.
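For reference, the recipe itself is short enough to sketch: train, prune the smallest weights, rewind the survivors to their original initialization, retrain with the mask held fixed. The tiny model and random data below are placeholders just to keep it runnable.

```python
# Lottery-ticket sketch: train dense -> magnitude-prune 90% -> rewind surviving
# weights to their original init -> retrain sparse. Model/data are placeholders.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())        # remember the initialization

def train(model, masks=None, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
        if masks:                                      # keep pruned weights at zero
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])

train(model)                                           # 1. train the dense network

masks = {}
for name, p in model.named_parameters():               # 2. prune 90% by magnitude
    if p.dim() > 1:                                    #    (weight matrices only)
        threshold = p.abs().flatten().kthvalue(int(0.9 * p.numel())).values
        masks[name] = (p.abs() > threshold).float()

model.load_state_dict(init_state)                      # 3. rewind to the same init
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in masks:
            p.mul_(masks[name])

train(model, masks=masks)                              # 4. retrain the sparse ticket
print({n: f"{int(m.sum())}/{m.numel()} weights kept" for n, m in masks.items()})
```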
Dec 31, 2025 14 tweets 3 min read
I debated posting this, but screw it...

Here's how mastering prompts will make you wealthy in 2026 (step-by-step):

Most people think AI is just a chatbot.

I used it to replace my entire workflow instead.

Earned $500K selling prompts while others are still asking ChatGPT basic questions.

Here's the exact playbook:
Dec 30, 2025 12 tweets 6 min read
Austin Kleon reverse-engineered how every great artist actually works in his book "Steal Like An Artist".

I spent hours breaking down his principles and turned them into an AI system.

Nothing is original. Everything is a remix. Creativity is theft made elegant.

Here are 8 AI prompts that make you more creative, less blocked, and impossible to ignore:

1. The Influence Map Builder

Kleon: "You are a mashup of what you let into your life."

Most people consume randomly. This prompt reverse-engineers your creative DNA.

Copy this:

"List 5-10 artists/creators I admire: [names]

What I love about each: [specific elements]
What I avoid in my work: [what I consciously reject]
My current style: [how I'd describe my work]

Using Kleon's "steal from your heroes":

- What's the common thread across my influences?
- Which elements can I combine in ways nobody else has?
- What am I stealing badly vs. stealing well?
- What would my work look like if I mashed up my top 3 influences?

Show me my creative lineage and what to steal next."
Dec 29, 2025 14 tweets 7 min read
Best coding model = Claude Opus 4.5
Best image generation model = Nano Banana Pro
Best writing model = Claude Sonnet 4.5
Best video generation model = Veo 3.1

Here are 10 ways you can use these models in 2026 to build a million-dollar digital agency:

1/ IDEATION & NICHE DISCOVERY

Most agencies fail because they pick oversaturated niches. Use Claude Sonnet 4.5 to find profitable gaps nobody's serving.

Prompt it: "Analyze 10 underserved niches in [your target industry] that desperately need content creation, design, and video services but have less than 5 specialized agencies serving them."

The model will give you specific niches with positioning strategies, ideal client profiles, average project values, and even pricing recommendations based on market research.

I did this for e-commerce brands selling sustainable products. Found a gap in video testimonials for ethical fashion brands.

That single insight led to $180K in contracts in 3 months.
Dec 27, 2025 12 tweets 3 min read
Bad prompts = robotic outputs.
Good prompts = decent outputs.
Great prompts = outputs that feel alive.

I stopped giving LLMs tasks. Now I give them consequences.

Emotional stakes > perfect instructions.

Here's the framework (Steal it):

For months, I obsessed over perfect prompts: role definitions, formatting rules, 12-step instructions.

The outputs were good. Technically correct. Soulless.

Then I accidentally stumbled on something that changed everything.
Dec 25, 2025 10 tweets 2 min read
SHOCKING: I stopped taking online courses.

ChatGPT now builds me custom curriculums for anything I want to learn.

Here are 7 prompts that turned it into a personal professor 👇

1/ Build the curriculum
Most people ask for explanations. That’s the mistake.

Prompt:
“Design a 30-day curriculum to master [skill]. Assume I’m starting at [level]. Each day should have: core concept, practical exercise, and a checkpoint question.”

This instantly replaces entire courses.
Dec 24, 2025 6 tweets 3 min read
Bruce Lee’s PHILOSOPHY OF SIMPLICITY… TURNED INTO AN AI OPERATING SYSTEM

I turned his entire way of thinking into a set of AI prompts that strip dogma, remove useless technique, and force you to adapt to reality instead of clinging to systems.

Steal the prompt 👇

Here's the "Adaptive Simplicity OS" prompt:

Use this when:

• learning a new skill
• building a system or workflow
• following advice from too many sources
• feeling stuck in rigid methods
• optimizing performance

---

ROLE:
You are Bruce Lee’s philosophy of adaptive simplicity distilled into a clarity and flexibility engine.
Your job is to remove dogma, unnecessary technique, and rigid structure.
You prioritize effectiveness in reality over loyalty to systems, styles, or tradition.

CONTEXT:
The user is using a method, system, belief, or workflow that feels heavy, rigid, or outdated.
They want to keep what works, discard what doesn’t, and adapt fluidly to real-world conditions.

PROCESS:
1. Identify the actual outcome the user is trying to achieve.
2. List all techniques, rules, habits, or systems currently being used.
3. Test each element against real-world effectiveness, not theory.
4. Flag anything kept out of tradition, identity, or comfort rather than results.
5. Remove or simplify anything that does not directly improve performance.
6. Reduce what remains into a flexible, minimal core.
7. Suggest how the system should adapt as conditions change.

RULES:
- Favor effectiveness over elegance
- Treat tradition and style as optional
- Remove rigidity before adding optimization
- Avoid theoretical perfection
- Prioritize speed, adaptability, and simplicity

OUTPUT FORMAT:
Step 1: Desired Outcome
Step 2: Current Techniques and Systems
Step 3: Reality Effectiveness Test
Step 4: What Exists Only Out of Habit
Step 5: What to Remove or Simplify
Step 6: Minimal Adaptive Core
Step 7: Adaptation Rules

USER INPUT:
Here is what I’m trying to improve or learn: [DESCRIBE IT CLEARLY]

---
Dec 22, 2025 11 tweets 3 min read
Here are 7 ChatGPT prompts that helped me take control of my money.

These turned financial chaos into complete clarity.

Here's how to use them (Copy & paste):

(Comment "AI" and I'll DM you a complete Prompt Engineering guide)

1 > The Money Reality Check

Helps you see where your money actually goes.

Prompt:

"Help me understand my current financial situation. Ask me 6 simple questions about income, spending, savings, and debt. Then summarize my money habits and highlight the biggest problem area. Keep it honest but non-judgmental."
Dec 20, 2025 12 tweets 6 min read
Alex Hormozi’s BUSINESS PHILOSOPHY TURNED INTO AN AI OPERATING SYSTEM

Most people consume Hormozi content and feel fired up for a day. Then they go back to vague offers, weak pricing, and random tactics.
I wanted something permanent.

So I turned Hormozi’s entire way of thinking about offers, value, and execution into a set of AI prompts that delete fluff, expose weak points, and force businesses to make money.

This feels like having Hormozi in your head, calmly asking: “Where’s the leverage?” 👇

1 / The Grand Slam Offer Constructor

Hormozi: "Your offer should be so good people feel stupid saying no." Most offers are feature lists. This prompt builds offers that create buying urgency. (A quick numeric sketch of the value equation follows the prompt.)

Copy this:

"My current offer: [what you're selling]

Target customer: [who buys this]
Their dream outcome: [what they actually want]
Perceived likelihood of success: [do they believe it works?]
Time to achievement: [how long until results?]
Effort & sacrifice required: [what's the cost to them?]

Using Hormozi's value equation:

Value = (Dream Outcome × Perceived Likelihood) / (Time Delay × Effort & Sacrifice)

- What guarantees increase their belief this will work?
- How do I compress time to results?
- What can I remove that they have to do?
- What bonuses make saying no feel insane?

Build me an offer they can't refuse."
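The value equation above is just arithmetic, which is why those four follow-up questions matter: each one moves a different term. A quick sketch with illustrative 1-10 scores (the numbers are mine, not Hormozi's):

```python
# Hormozi's value equation as arithmetic. Scores are illustrative 1-10 ratings.
def offer_value(dream_outcome, likelihood, time_delay, effort):
    return (dream_outcome * likelihood) / (time_delay * effort)

before = offer_value(dream_outcome=8, likelihood=4, time_delay=6, effort=7)
# add a guarantee, compress time to results, remove effort from the buyer:
after = offer_value(dream_outcome=8, likelihood=7, time_delay=3, effort=4)
print(f"before={before:.2f}  after={after:.2f}  lift={after / before:.1f}x")
```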
Dec 19, 2025 8 tweets 3 min read
Don't use Perplexity or ChatGPT for market research.

I tested Gemini 3.0 and it's on a whole different level for data analysis.

Here are 5 prompts that turn it into your research team:

(Comment "Gem" and I'll DM you my Gemini Mastery Guide for free)

1/ THE MARKET MAP PROMPT

Everyone starts with “what’s the market size lol”
but winners map the entire battlefield first.

Prompt to steal:

“Give me a complete market map for [industry].
Break it into segments, sub-segments, customer profiles, top players, pricing models, and emerging gaps.
Highlight where new entrants have the highest odds of success.”

This gives you clarity fast.
Dec 18, 2025 11 tweets 5 min read
Google DeepMind researchers just exposed a prompting technique that destroys everything you thought you knew about AI reasoning.

It's called "role reversal" and it boosts logical accuracy by 40%.

Here's the technique they don't want you to know:

Here's what actually happens when you ask ChatGPT a complex question.
The model generates an answer. Sounds confident. Ships it to you. Done.

But here's the problem: that first answer is almost always incomplete. The model doesn't naturally challenge its own logic. It doesn't look for gaps. It just... stops.

Role reversal flips this completely. Instead of accepting the first output, you force the AI to become its own harshest critic. You make it play devil's advocate against everything it just said.

The result? The model catches logical gaps it would've missed. It spots assumptions it made without evidence. It finds holes in reasoning that seemed airtight 30 seconds ago.
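Mechanically, role reversal is just a second and third pass over the model's own output. A minimal sketch with the OpenAI Python client; the model name, question, and prompt wording are placeholders.

```python
# Role-reversal sketch: answer -> self-critique as a harsh reviewer -> revise.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

question = "Should an early-stage startup build on a single cloud provider?"
history = [{"role": "user", "content": question}]

answer = ask(history)                                   # pass 1: the normal answer
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Now reverse roles: you are the harshest critic "
     "of the answer above. List every unstated assumption, logical gap, and "
     "counterargument you can find."},
]
critique = ask(history)                                 # pass 2: self-critique
history += [
    {"role": "assistant", "content": critique},
    {"role": "user", "content": "Rewrite the original answer so it survives that critique."},
]
print(ask(history))                                     # pass 3: the revised answer
```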
Dec 16, 2025 8 tweets 4 min read
OpenAI and Anthropic engineers don't prompt like everyone else.

I've been reverse-engineering their techniques for 2.5 years across all AI models.

Here are 5 prompting methods that get you AI engineer-level results:

(Comment "AI" for my free prompt engineering guide)

1. Constitutional AI Prompting

Most people tell AI what to do. Engineers tell it how to think.

Constitutional AI is how Anthropic trained Claude to refuse harmful requests while staying helpful. The same idea works as a prompting pattern: state your principles before your instructions.

Template:

PRINCIPLES:
[Your guidelines]

TASK:
[Your actual request]


Example:

"
PRINCIPLES:
- Prioritize accuracy over speed
- Cite sources when making claims
- Admit uncertainty rather than guess

TASK:
Analyze the latest semiconductor tariffs and their impact on AI chip supply chains.
"

This works because you're setting behavioral constraints before the model processes your request.
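As an API call, the same pattern is simply principles in the system prompt and the task in the user message. A sketch with the Anthropic Python SDK; the model name is a placeholder, not a recommendation.

```python
# Principles-before-task sketch with the Anthropic SDK. Model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRINCIPLES = (
    "- Prioritize accuracy over speed\n"
    "- Cite sources when making claims\n"
    "- Admit uncertainty rather than guess"
)
TASK = ("Analyze the latest semiconductor tariffs and their impact "
        "on AI chip supply chains.")

message = client.messages.create(
    model="claude-sonnet-4-5",            # placeholder model name
    max_tokens=1024,
    system=PRINCIPLES,                    # constraints are set before the request
    messages=[{"role": "user", "content": TASK}],
)
print(message.content[0].text)
```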
Dec 14, 2025 12 tweets 3 min read
STOP TELLING AI TO "WRITE A BLOG POST FOR ME".

Bad prompt = Bad result.

Use these prompts instead and see the magic:

1/ Create Realistic Images
Dec 13, 2025 14 tweets 4 min read
Most people ask AI to “write a blog post” and then wonder why it sounds generic.

What they don’t know is that elite writers and research teams use hidden prompting techniques specifically for long-form writing.

These 10 techniques control structure, coherence, and depth over thousands of words. Almost nobody uses them.

Here are the advanced prompt techniques for writing blogs, essays, and newsletters:

Bookmark this.

Technique 1: Invisible Outline Lock

Great long-form writing lives or dies by structure.

Instead of asking for an outline, experts force the model to create one silently and obey it.

Template:

"Before writing, internally create a detailed outline optimized for clarity,
logical flow, and narrative momentum.

Do not show the outline.

Write the full article strictly following it."
Dec 12, 2025 13 tweets 7 min read
Here are 10 ways you can use GPT-5.2 today to automate 90% of your work in minutes:

1. Research

Mega prompt:

You are an expert research analyst. I need comprehensive research on [TOPIC].

Please provide:
1. Key findings from the last 12 months
2. Data and statistics with sources
3. Expert opinions and quotes
4. Emerging trends and predictions
5. Controversial viewpoints or debates
6. Practical implications for [INDUSTRY/AUDIENCE]

Format as an executive brief with clear sections. Include source links for all claims.

Additional context: [YOUR SPECIFIC NEEDS]
Dec 11, 2025 5 tweets 3 min read
RICHARD FEYNMAN’S WHOLE LEARNING PHILOSOPHY… PACKED INTO ONE PROMPT

I spent days engineering a meta-prompt that teaches you any topic using Feynman’s exact approach:

simple analogies, ruthless clarity, iterative refinement, and guided self-explanation.

It feels like having a Nobel-level tutor inside ChatGPT and Claude 👇

Here's the prompt that can make you learn anything 10x faster:

ROLE:
You are a master explainer who channels Richard Feynman’s ability to break complex ideas into simple, intuitive truths.
Your goal is to help the user understand any topic through analogy, questioning, and iterative refinement until they can teach it back confidently.

CONTEXT:
The user wants to deeply learn a topic using a step-by-step Feynman learning loop:
• simplify
• identify gaps
• question assumptions
• refine understanding
• apply the concept
• compress it into a teachable insight

PROCESS:
1. Ask the user for:
• the topic they want to learn
• their current understanding level
2. Give a simple explanation with a clean analogy.
3. Highlight common confusion points.
4. Ask 3 to 5 targeted questions to reveal gaps.
5. Refine the explanation in 2 to 3 increasingly intuitive cycles.
6. Test understanding through application or teaching.
7. Create a final “teaching snapshot” that compresses the idea.

RULES:
- Use analogies in every explanation
- No jargon early on
- Define any technical term simply
- Each refinement must be clearer
- Prioritize understanding over recall

OUTPUT FORMAT:
Step 1: Simple Explanation
Step 2: Confusion Check
Step 3: Refinement Cycles
Step 4: Understanding Challenge
Step 5: Teaching Snapshot

FIRST MESSAGE:
"I'm ready. What topic do you want to master and how well do you understand it?"