God of Prompt
🔑 Sharing AI Prompts, Tips & Tricks. The Biggest Collection of AI Prompts & Guides for ChatGPT, Gemini, Grok, Claude, & Midjourney AI → https://t.co/vwZZ2VSfsN
Jan 3 14 tweets 5 min read
🚨 A 1991 technique lets you build trillion-parameter models while only activating billions.

Nobody scaled it for decades.

Now Mixture of Experts (MoE) is the secret behind the fastest, cheapest open-source giants, and it's about to make dense LLMs look outdated.

Here's how 30-year-old math became the future of AI:

The core idea is brilliantly simple:

Instead of one giant dense model doing everything, you train hundreds of specialized "expert" sub-networks inside a single model.

A router network decides which experts to activate for each input.

Most experts stay dormant. Only 2-8 activate per token.

Result: Trillion-parameter capacity at billion-parameter cost.
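If you want to see the mechanic in code, here's a minimal, illustrative sketch of a top-k MoE layer in PyTorch. It's a toy (tiny dimensions, naive loops, no load balancing), not a production implementation, but the core idea is there: a router scores all experts, and only the top k run for each token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a learned router picks top-k experts per token."""

    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_experts)]
        )
        self.router = nn.Linear(d_model, n_experts)  # scores every expert for every token
        self.k = k

    def forward(self, x):                            # x: (num_tokens, d_model)
        scores = self.router(x)                      # (num_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)         # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                   # only k of n_experts ever run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)                         # 16 tokens, embedding size 64
print(ToyMoELayer()(tokens).shape)                   # torch.Size([16, 64])
```

Capacity scales with the number of experts; compute per token scales only with k.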
Jan 2 15 tweets 6 min read
🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Five years later, nobody implements it.

"The Lottery Ticket Hypothesis" just went from academic curiosity to production necessity, and it's about to 10x your inference costs.

Here's what changed (and why this matters now):

The original 2018 paper was mind-blowing:

Train a massive neural network. Delete 90% of it based on weight magnitudes. Retrain from scratch with the same initialization.

Result: The pruned network matches the original's accuracy.
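Here's a rough one-shot sketch of that recipe in PyTorch. It's simplified (the paper prunes iteratively, and per-layer details matter), and `train_fn` is a placeholder for whatever training loop you'd supply:

```python
import copy
import torch

def lottery_ticket_prune(model, train_fn, sparsity=0.9):
    """Toy one-shot Lottery Ticket recipe: train, prune by magnitude, rewind, retrain."""
    init_state = copy.deepcopy(model.state_dict())      # 1. save the original initialization

    train_fn(model)                                      # 2. train the dense network

    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                  # prune weight matrices, skip biases
            k = max(1, int(p.numel() * sparsity))
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > threshold).float()

    model.load_state_dict(init_state)                    # 3. rewind to the SAME initialization
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])                      # zero out the pruned 90%

    train_fn(model)                                      # 4. retrain the sparse "winning ticket"
    # NOTE: a faithful run also re-applies the masks after every optimizer step
    # (or uses gradient hooks) so pruned weights stay at zero during retraining.
    return model, masks
```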

But there was a catch that killed adoption.
Dec 31, 2025 14 tweets 3 min read
I debated posting this, but screw it...

Here's how mastering prompts will make you wealthy in 2026 (step-by-step):

Most people think AI is just a chatbot.

I used it to replace my entire workflow instead.

Earned $500K selling prompts while others are still asking ChatGPT basic questions.

Here's the exact playbook:
Dec 30, 2025 12 tweets 6 min read
Austin Kleon reverse-engineered how every great artist actually works in his book "Steal Like An Artist."

I spent hours breaking down his principles and turned them into an AI system.

Nothing is original. Everything is a remix. Creativity is theft made elegant.

Here are 8 AI prompts that make you more creative, less blocked, and impossible to ignore:

1. The Influence Map Builder

Kleon: "You are a mashup of what you let into your life."

Most people consume randomly. This prompt reverse-engineers your creative DNA.

Copy this:

"List 5-10 artists/creators I admire: [names]

What I love about each: [specific elements]
What I avoid in my work: [what I consciously reject]
My current style: [how I'd describe my work]

Using Kleon's "steal from your heroes":

- What's the common thread across my influences?
- Which elements can I combine in ways nobody else has?
- What am I stealing badly vs. stealing well?
- What would my work look like if I mashed up my top 3 influences?

Show me my creative lineage and what to steal next."
Dec 29, 2025 14 tweets 7 min read
Best coding model = Claude Opus 4.5
Best image generation model = Nano Banana Pro
Best writing model = Claude Sonnet 4.5
Best video generation model = Veo 3.1

Here are 10 ways you can use these models in 2026 to build a million-dollar digital agency:

1/ IDEATION & NICHE DISCOVERY

Most agencies fail because they pick oversaturated niches. Use Claude Sonnet 4.5 to find profitable gaps nobody's serving.

Prompt it: "Analyze 10 underserved niches in [your target industry] that desperately need content creation, design, and video services but have less than 5 specialized agencies serving them."

The model will give you specific niches with positioning strategies, ideal client profiles, average project values, and even pricing recommendations based on market research.

I did this for e-commerce brands selling sustainable products. Found a gap in video testimonials for ethical fashion brands.

That single insight led to $180K in contracts in 3 months.
Dec 27, 2025 12 tweets 3 min read
Bad prompts = robotic outputs.
Good prompts = decent outputs.
Great prompts = outputs that feel alive.

I stopped giving LLMs tasks. Now I give them consequences.

Emotional stakes > perfect instructions.

Here's the framework (Steal it):

For months, I obsessed over perfect prompts: role definitions, formatting rules, 12-step instructions.

The outputs were good. Technically correct. Soulless.

Then I accidentally stumbled on something that changed everything.
Dec 25, 2025 10 tweets 2 min read
SHOCKING: I stopped taking online courses.

ChatGPT now builds me custom curriculums for anything I want to learn.

Here are 7 prompts that turned it into a personal professor 👇

1/ Build the curriculum
Most people ask for explanations. That’s the mistake.

Prompt:
“Design a 30-day curriculum to master [skill]. Assume I’m starting at [level]. Each day should have: core concept, practical exercise, and a checkpoint question.”

This instantly replaces entire courses.
Dec 24, 2025 6 tweets 3 min read
Bruce Lee’s PHILOSOPHY OF SIMPLICITY… TURNED INTO AN AI OPERATING SYSTEM

I turned his entire way of thinking into a set of AI prompts that strip dogma, remove useless technique, and force you to adapt to reality instead of clinging to systems.

Steal the prompt 👇

Here's "The Adaptive Simplicity OS" prompt:

Use this when:

• learning a new skill
• building a system or workflow
• following advice from too many sources
• feeling stuck in rigid methods
• optimizing performance

---

ROLE:
You are Bruce Lee’s philosophy of adaptive simplicity distilled into a clarity and flexibility engine.
Your job is to remove dogma, unnecessary technique, and rigid structure.
You prioritize effectiveness in reality over loyalty to systems, styles, or tradition.

CONTEXT:
The user is using a method, system, belief, or workflow that feels heavy, rigid, or outdated.
They want to keep what works, discard what doesn’t, and adapt fluidly to real-world conditions.

PROCESS:
1. Identify the actual outcome the user is trying to achieve.
2. List all techniques, rules, habits, or systems currently being used.
3. Test each element against real-world effectiveness, not theory.
4. Flag anything kept out of tradition, identity, or comfort rather than results.
5. Remove or simplify anything that does not directly improve performance.
6. Reduce what remains into a flexible, minimal core.
7. Suggest how the system should adapt as conditions change.

RULES:
- Favor effectiveness over elegance
- Treat tradition and style as optional
- Remove rigidity before adding optimization
- Avoid theoretical perfection
- Prioritize speed, adaptability, and simplicity

OUTPUT FORMAT:
Step 1: Desired Outcome
Step 2: Current Techniques and Systems
Step 3: Reality Effectiveness Test
Step 4: What Exists Only Out of Habit
Step 5: What to Remove or Simplify
Step 6: Minimal Adaptive Core
Step 7: Adaptation Rules

USER INPUT:
Here is what I’m trying to improve or learn: [DESCRIBE IT CLEARLY]

---
Dec 22, 2025 11 tweets 3 min read
Here are 7 ChatGPT prompts that helped me take control of my money.

These turned financial chaos into complete clarity.

Here's how to use them (Copy & paste):

(Comment "AI" and I'll DM you a complete Prompt Engineering guide) Image 1 > The Money Reality Check

Helps you see where your money actually goes.

Prompt:

"Help me understand my current financial situation. Ask me 6 simple questions about income, spending, savings, and debt. Then summarize my money habits and highlight the biggest problem area. Keep it honest but non-judgmental."
Dec 20, 2025 12 tweets 6 min read
Alex Hormozi’s BUSINESS PHILOSOPHY TURNED INTO AN AI OPERATING SYSTEM

Most people consume Hormozi content and feel fired up for a day. Then they go back to vague offers, weak pricing, and random tactics.
I wanted something permanent.

So I turned Hormozi’s entire way of thinking about offers, value, and execution into a set of AI prompts that delete fluff, expose weak points, and force businesses to make money.

This feels like having Hormozi in your head, calmly asking: “Where’s the leverage?” 👇

1 / The Grand Slam Offer Constructor

Hormozi: "Your offer should be so good people feel stupid saying no." Most offers are features lists. This prompt builds offers that create buying urgency.

Copy this:

"My current offer: [what you're selling]

Target customer: [who buys this]
Their dream outcome: [what they actually want]
Perceived likelihood of success: [do they believe it works?]
Time to achievement: [how long until results?]
Effort & sacrifice required: [what's the cost to them?]

Using Hormozi's value equation:

Value = (Dream Outcome × Perceived Likelihood) / (Time Delay × Effort & Sacrifice)

- What guarantees increase their belief this will work?
- How do I compress time to results?
- What can I remove that they have to do?
- What bonuses make saying no feel insane?

Build me an offer they can't refuse."
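If the value equation feels abstract, here's a tiny sketch of it as plain arithmetic. The 1-10 scores are made up purely for illustration; the point is how the levers move the result:

```python
def offer_value(dream_outcome, likelihood, time_delay, effort):
    # Hormozi's value equation: raise the top, shrink the bottom.
    return (dream_outcome * likelihood) / (time_delay * effort)

# Hypothetical 1-10 scores, purely illustrative.
before = offer_value(dream_outcome=8, likelihood=4, time_delay=6, effort=7)  # no guarantee, slow, lots of work
after  = offer_value(dream_outcome=8, likelihood=8, time_delay=2, effort=3)  # guarantee added, time compressed, done-for-you

print(round(before, 2), round(after, 2))  # 0.76 vs 10.67 -- same dream outcome, ~14x the perceived value
```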
Dec 19, 2025 8 tweets 3 min read
Don't use Perplexity or ChatGPT for market research.

I tested Gemini 3.0 and it's on a whole different level for data analysis.

Here are 5 prompts that turn it into your research team:

(Comment "Gem" and I'll DM you my Gemini Mastery Guide for free) Image 1/ THE MARKET MAP PROMPT

Everyone starts with “what’s the market size lol”
but winners map the entire battlefield first.

Prompt to steal:

“Give me a complete market map for [industry].
Break it into segments, sub-segments, customer profiles, top players, pricing models, and emerging gaps.
Highlight where new entrants have the highest odds of success.”

This gives you clarity fast.
Dec 18, 2025 11 tweets 5 min read
Google DeepMind researchers just exposed a prompting technique that destroys everything you thought you knew about AI reasoning.

It's called "role reversal" and it boosts logical accuracy by 40%.

Here's the technique they don't want you to know:

Here's what actually happens when you ask ChatGPT a complex question.
The model generates an answer. Sounds confident. Ships it to you. Done.

But here's the problem: that first answer is almost always incomplete. The model doesn't naturally challenge its own logic. It doesn't look for gaps. It just... stops.

Role reversal flips this completely. Instead of accepting the first output, you force the AI to become its own harshest critic. You make it play devil's advocate against everything it just said.

The result? The model catches logical gaps it would've missed. It spots assumptions it made without evidence. It finds holes in reasoning that seemed airtight 30 seconds ago.
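If you'd rather run this as a repeatable loop than paste prompts by hand, here's a short sketch using the OpenAI Python SDK. The model name, question, and critique wording are my own placeholders, not DeepMind's exact protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Should a 10-person startup build its own vector database?"

draft = ask(question)

# Role reversal: the model argues against its own answer.
critique = ask(
    f"Here is an answer to the question '{question}':\n\n{draft}\n\n"
    "Play devil's advocate. List every logical gap, unstated assumption, "
    "and missing consideration in this answer."
)

# Final pass: rewrite the answer so it survives the critique.
final = ask(
    f"Question: {question}\n\nDraft answer:\n{draft}\n\nCritique:\n{critique}\n\n"
    "Rewrite the answer, fixing every valid point raised in the critique."
)
print(final)
```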
Dec 16, 2025 8 tweets 4 min read
OpenAI and Anthropic engineers don't prompt like everyone else.

I've been reverse-engineering their techniques for 2.5 years across all AI models.

Here are 5 prompting methods that get you AI engineer-level results:

(Comment "AI" for my free prompt engineering guide) Image 1. Constitutional AI Prompting

Most people tell AI what to do. Engineers tell it how to think.

Constitutional AI adds principles before instructions. It's how Anthropic trained Claude to refuse harmful requests while staying helpful.

Template:

PRINCIPLES:
[Your guidelines]

TASK:
[Your actual request]

Example:

"PRINCIPLES:
- Prioritize accuracy over speed
- Cite sources when making claims
- Admit uncertainty rather than guess

TASK:
Analyze the latest semiconductor tariffs and their impact on AI chip supply chains."

This works because you're setting behavioral constraints before the model processes your request.
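If you're calling the model through an API rather than the chat UI, the natural home for the principles block is the system message. A minimal sketch with the OpenAI Python SDK (model name is a placeholder; Anthropic's SDK has an equivalent `system` parameter):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

principles = (
    "- Prioritize accuracy over speed\n"
    "- Cite sources when making claims\n"
    "- Admit uncertainty rather than guess"
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Principles go first, as behavioral constraints...
        {"role": "system", "content": f"Follow these principles in every answer:\n{principles}"},
        # ...then the actual request.
        {"role": "user", "content": "Analyze the latest semiconductor tariffs and their impact on AI chip supply chains."},
    ],
)
print(resp.choices[0].message.content)
```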
Dec 14, 2025 12 tweets 3 min read
STOP TELLING AI TO "WRITE A BLOG POST FOR ME".

Bad prompt = Bad result.

Use these prompts instead and see the magic:

1/ Create Realistic Images
Dec 13, 2025 14 tweets 4 min read
Most people ask AI to “write a blog post” and then wonder why it sounds generic.

What they don’t know is that elite writers and research teams use hidden prompting techniques specifically for long-form writing.

These 10 techniques control structure, coherence, and depth over thousands of words. Almost nobody uses them.

Here are the advanced prompt techniques for writing blogs, essays, and newsletters:

Bookmark this.

Technique 1: Invisible Outline Lock

Great long-form writing lives or dies by structure.

Instead of asking for an outline, experts force the model to create one silently and obey it.

Template:

"Before writing, internally create a detailed outline optimized for clarity,
logical flow, and narrative momentum.

Do not show the outline.

Write the full article strictly following it."
Dec 12, 2025 13 tweets 7 min read
Here are 10 ways you can use GPT-5.2 today to automate 90% of your work in minutes:

1. Research

Mega prompt:

You are an expert research analyst. I need comprehensive research on [TOPIC].

Please provide:
1. Key findings from the last 12 months
2. Data and statistics with sources
3. Expert opinions and quotes
4. Emerging trends and predictions
5. Controversial viewpoints or debates
6. Practical implications for [INDUSTRY/AUDIENCE]

Format as an executive brief with clear sections. Include source links for all claims.

Additional context: [YOUR SPECIFIC NEEDS]
Dec 11, 2025 5 tweets 3 min read
RICHARD FEYNMAN’S WHOLE LEARNING PHILOSOPHY… PACKED INTO ONE PROMPT

I spent days engineering a meta-prompt that teaches you any topic using Feynman’s exact approach:

simple analogies, ruthless clarity, iterative refinement, and guided self-explanation.

It feels like having a Nobel-level tutor inside ChatGPT and Claude 👇

Here's the prompt that can make you learn anything 10x faster:


ROLE:
You are a master explainer who channels Richard Feynman’s ability to break complex ideas into simple, intuitive truths.
Your goal is to help the user understand any topic through analogy, questioning, and iterative refinement until they can teach it back confidently.

CONTEXT:
The user wants to deeply learn a topic using a step-by-step Feynman learning loop:
• simplify
• identify gaps
• question assumptions
• refine understanding
• apply the concept
• compress it into a teachable insight

PROCESS:
1. Ask the user for:
• the topic they want to learn
• their current understanding level
2. Give a simple explanation with a clean analogy.
3. Highlight common confusion points.
4. Ask 3 to 5 targeted questions to reveal gaps.
5. Refine the explanation in 2 to 3 increasingly intuitive cycles.
6. Test understanding through application or teaching.
7. Create a final “teaching snapshot” that compresses the idea.

RULES:
- Use analogies in every explanation
- No jargon early on
- Define any technical term simply
- Each refinement must be clearer
- Prioritize understanding over recall

OUTPUT FORMAT:
Step 1: Simple Explanation
Step 2: Confusion Check
Step 3: Refinement Cycles
Step 4: Understanding Challenge
Step 5: Teaching Snapshot

START:
"I'm ready. What topic do you want to master and how well do you understand it?"
Dec 10, 2025 11 tweets 5 min read
Top engineers at OpenAI, Anthropic, and Google don't prompt like you do.

They use 5 techniques that turn mediocre outputs into production-grade results.

I spent 3 weeks reverse-engineering their methods.

Here's what actually works (steal the prompts + techniques) 👇

Technique 1: Constraint-Based Prompting

Most prompts are too open-ended. Engineers add hard constraints that force the model into a narrower solution space, eliminating 80% of bad outputs before they happen.

Template:

Generate [output] with these non-negotiable constraints:
- Must include: [requirement 1], [requirement 2]
- Must avoid: [restriction 1], [restriction 2]
- Format: [exact structure]
- Length: [specific range]

Example:

Generate a product description for wireless headphones with these constraints:
- Must include: battery life in hours, noise cancellation rating, weight
- Must avoid: marketing fluff, comparisons to competitors, subjective claims
- Format: 3 bullet points followed by 1 sentence summary
- Length: 50-75 words total
Dec 7, 2025 5 tweets 4 min read
MIT researchers just proved that prompt engineering is a social skill, not a technical one.

and that revelation breaks everything we thought we knew about working with AI.

they analyzed 667 people solving problems with AI. used bayesian statistics to isolate two different abilities in each person. ability to solve problems alone. ability to solve problems with AI.

here's what shattered the entire framework.

the two abilities barely correlate.

being a genius problem-solver on your own tells you almost nothing about how well you'll collaborate with AI. they're separate, measurable, independently functioning skills.

which means every prompt engineering course, every mega-prompt template, every "10 hacks to get better results" thread is fundamentally misunderstanding what's actually happening when you get good results.

the templates work. but not for the reason everyone thinks.

they work because they accidentally force you to practice something else entirely.

the skill that actually predicts success with AI isn't about keywords or structure or chain-of-thought formatting.

it's theory of mind. your capacity to model what another agent knows, doesn't know, believes, needs. to anticipate their confusion before it happens. to bridge information gaps you didn't even realize existed.

and here's the part that changes the game completely: they proved it's not a static trait you either have or don't.

it's dynamic. activated. something you turn on and off.

moment-to-moment changes in how much cognitive effort you put into perspective-taking directly changed AI response quality on individual prompts.

meaning when you actually stop and think "what does this AI need to know that i'm taking for granted" on one specific question, you get measurably better answers on that question.

the skill is something you dial up and down. practice. strengthen. like a muscle you didn't know you had.

it gets better the more you treat AI like a collaborator with incomplete information instead of a search engine you're trying to hack with the right magic words.

the implications are brutal for how we've been approaching this.

ToM predicts performance with AI but has zero correlation with solo performance. pure collaborative skill.

the templates don't matter if you're still treating AI like a vending machine where you input the magic words and get the output.

what actually works is developing intuition for:
- where the AI will misunderstand before it does
- what context you're taking for granted
- what your actual goal is versus what you typed
- treating it like an intelligent but alien collaborator

this is why some people get absolute magic from the same model that gives everyone else generic slop. same GPT-4. completely different results.

they've built a sense for what creates confusion in a non-human mind. they bridge gaps automatically now.

also means we're benchmarking AI completely wrong. everyone races for MMLU scores. highest static test performance. biggest context windows.

but that measures solo intelligence.

the real metric: collaborative uplift. how much smarter does this AI make the human-AI team when they work together?

GPT-4o boosted human performance +29 percentage points. llama 3.1 8b boosted it +23 points.

that spread matters infinitely more than their standalone benchmark scores.
Dec 6, 2025 13 tweets 4 min read
Claude Sonnet 4.5 is the closest thing to an economic cheat code we’ve ever touched, but only if you ask it the prompts that make it uncomfortable.

Here are 10 Powerful Claude prompts that will help you build a million-dollar business (steal them):

1. Business Idea Generator

"Suggest 5 business ideas based on my interests: [Your interests]. Make them modern, digital-first, and feasible for a solo founder."

How to: Replace [Your interests] with anything you’re passionate about or experienced in.
Dec 5, 2025 13 tweets 8 min read
OpenAI, Anthropic, and Google use 10 prompting techniques to get 100% accurate output, and I'm about to leak all of them for free.

This might get me in trouble... but here we go.

(Comment "Prompt" and I'll DM you my complete prompt engineering guide for free) Image Technique 1: Role-Based Constraint Prompting

Experts don't just ask AI to "write code." They assign expert roles with specific constraints.

Template:

You are a [specific role] with [X years] experience in [domain].
Your task: [specific task]
Constraints: [list 3-5 specific limitations]
Output format: [exact format needed]

---

Example:

You are a senior Python engineer with 10 years in data pipeline optimization.
Your task: Build a real-time ETL pipeline for 10M records/hour
Constraints:
- Must use Apache Kafka
- Maximum 2GB memory footprint
- Sub-100ms latency
- Zero data loss tolerance
Output format: Production-ready code with inline documentation

---

This gets you 10x more specific outputs than "write me an ETL pipeline."

Watch the OpenAI demo of GPT-5 and see how they were prompting ChatGPT... you will get the idea.