God of Prompt
🔑 Sharing AI Prompts, Tips & Tricks. The Biggest Collection of AI Prompts & Guides for ChatGPT, Gemini, Grok, Claude, & Midjourney AI → https://t.co/vwZZ2VSfsN
Jan 29 13 tweets 4 min read
Stanford researchers just published a prompting technique that makes today’s LLMs behave like better versions of themselves.

It’s called “prompt ensembling” and it runs 5 variations of the same prompt, then merges the outputs.

Here’s how it works 👇

The concept is simple:

Instead of asking your question once and hoping for the best, you ask it 5 different ways and combine the answers.

Think of it like getting second opinions from 5 doctors instead of trusting one diagnosis.

Stanford tested this on GPT-5.2, Claude 4.5, and Gemini 3.0.
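The thread doesn't spell out the merge step, so here's a minimal sketch of the idea, assuming the five variations are paraphrase templates and the merge is a simple majority vote. The `ask` callable, the `stub`, and the `rephrasings` list are illustrative stand-ins, not any real API:

```python
from collections import Counter

def ensemble(question, rephrasings, ask, n=5):
    """Ask the same question n ways and merge the answers by majority vote.

    `ask` is any callable that sends one prompt to an LLM and returns
    its answer as a string (a stand-in, not a real API client).
    """
    prompts = [template.format(q=question) for template in rephrasings[:n]]
    answers = [ask(p).strip().lower() for p in prompts]
    # Majority vote: the answer most variants agree on wins.
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

# Deterministic stub in place of a real model call, just to trace the flow:
stub = lambda p: "paris" if "capital" in p.lower() else "unsure"
rephrasings = [
    "{q}",
    "Answer briefly: {q}",
    "Think step by step, then answer: {q}",
    "As a geography expert: {q}",
    "Double-check before answering: {q}",
]
answer, agreement = ensemble("What is the capital of France?", rephrasings, stub)
```

The agreement score doubles as a rough confidence signal: low agreement across variants is a hint the question deserves a closer look.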
Jan 28 17 tweets 5 min read
Telling an LLM to "act as an expert" is lazy and doesn't work.

I tested 47 persona configurations across Claude, GPT-4, and Gemini.

Generic personas = 60% quality
Specific personas = 94% quality

Here's how to actually get expert-level outputs:

Here's what most people do:

"Act as an expert marketing strategist and help me with my campaign."

The LLM has no idea what kind of expert.

B2B or B2C?
Digital or traditional?
Startup or enterprise?
Data-driven or creative-first?

Garbage in → garbage out.
Jan 27 12 tweets 4 min read
🚨 This paper just murdered the foundation of every AI model you've ever used.

A researcher proved you can match Transformer performance WITHOUT computing a single attention weight.

Here's what changed (and why this matters now):

For 8 years, we've treated attention as sacred.

"Attention Is All You Need" became gospel.

But this paper exposes the dirty truth: attention isn't what makes Transformers work.

It's the geometric lifting. And there's a cleaner way to do it.
Jan 26 15 tweets 5 min read
Perplexity Pro just became the best $20/month I spend.

I use it for market research, trend analysis, and competitive intelligence.

Here are 12 prompts that replaced my $500/month research subscriptions:

Prompt 1: "Analyze the last 50 funding rounds in [industry]. Break down average valuation, revenue multiples, and which investors are most active. Compare to 6 months ago."

This single prompt replaced my PitchBook subscription. Gets real-time data with sources. I used this to time our Series A perfectly.
Jan 24 12 tweets 5 min read
Bad prompts = bad results.
Good prompts = good results.
Great prompts = life-changing results.

These 4 frameworks create great prompts every time.

Your AI breakthrough starts here (steal the frameworks):

Framework 1: R.I.S.E. (Role, Instruction, Specifics, Examples)

This is what separates amateurs from pros.

ROLE: "You are a senior product manager at a SaaS company"
INSTRUCTION: "Write a product roadmap presentation"
SPECIFICS: "For Q2 2025, focusing on enterprise features, 10 slides max"
EXAMPLES: "Slide 1 should look like: [Title] → [3 bullet points] → [Metric]"

Works on ChatGPT, Claude, and Gemini. The more specific, the better the output.
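The four fields above can be assembled mechanically, which makes the framework easy to reuse. A minimal sketch; the `rise_prompt` helper is hypothetical, purely to show the structure:

```python
def rise_prompt(role, instruction, specifics, examples):
    """Assemble a R.I.S.E. prompt: Role, Instruction, Specifics, Examples."""
    return "\n".join([
        f"ROLE: {role}",
        f"INSTRUCTION: {instruction}",
        f"SPECIFICS: {specifics}",
        f"EXAMPLES: {examples}",
    ])

# Filled in with the thread's own example values:
prompt = rise_prompt(
    role="You are a senior product manager at a SaaS company",
    instruction="Write a product roadmap presentation",
    specifics="For Q2 2025, focusing on enterprise features, 10 slides max",
    examples="Slide 1 should look like: [Title] → [3 bullet points] → [Metric]",
)
```

Keeping the four parts as separate arguments forces you to actually write each one, which is the point of the framework.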
Jan 22 14 tweets 5 min read
Don't waste $5k on a sales coach.

I reverse-engineered the best closing techniques using Claude, ChatGPT, and Grok for 3 months.

These 10 prompts handle every stage of the sales cycle from cold outreach to final close.

Here's what sales gurus don't want you to know:

1/ Cold DM Opener (LinkedIn/Twitter/IG)

Prompt:

"You are a world-class salesperson who writes concise, personalized cold messages that get 30%+ reply rates.

Write a 3-4 line cold DM to [Prospect Name] at [Company].
They recently [specific trigger, e.g. posted about X, raised funding, hired for Y].

My product [brief 1-sentence description].

Make it curious, non-salesy, and end with a soft question."
Jan 21 15 tweets 5 min read
R.I.P. LinkedIn and job boards.

Top candidates now use LLMs (ChatGPT, Claude Opus, Gemini) as their secret career coach, tailoring everything perfectly and landing interviews 3–5x faster.

Here are 12 killer prompts that helped me and dozens of others switch jobs or level up:

1. ATS-Proof Resume Tailor

Prompt:

"Here is my current resume [paste full text or upload PDF].

Here is the job description [paste JD].

Rewrite my resume to perfectly match:

- Incorporate exact keywords/phrases from JD
- Quantify achievements where possible
- Keep under 1 page, bullet format
- Highlight top 3–5 matches in a summary

Output: Full revised resume + list of changes made."
Jan 20 13 tweets 4 min read
Gemini 4.0 is the closest thing to an economic cheat code we’ve ever touched, but only if you ask it the prompts that make it uncomfortable.

Here are 10 powerful Gemini 4.0 prompts that will help you build a million-dollar business (steal them):

1. Business Idea Generator

"Suggest 5 business ideas based on my interests: [Your interests]. Make them modern, digital-first, and feasible for a solo founder."

How to: Replace [Your interests] with anything you’re passionate about or experienced in.
Jan 19 8 tweets 4 min read
PPT templates are dead.

Kimi Agentic Slides generates fully editable, designer-level presentations in seconds: no templates, no limits, pure customization.

It's like NotebookLM but you can actually edit everything.

Here's what just changed forever:

Most AI slide tools lock you into rigid designs. Kimi generates presentations that are 100% editable: every element, every color, every layout.

You control the final output, not the AI.
Jan 19 11 tweets 5 min read
🚨 I tested 50 "AI-proof" job skills; only 7 actually matter in 2026.

Everyone says "learn to code" or "pivot to AI."

I analyzed 100,000 job postings, interviewed 200 hiring managers, tracked salary data for 6 months.

43 skills are already commoditized.

Here are the 7 that still command premium pay:

What DIED in 2023-2026:

❌ Prompt Engineering: −60% salary ($95K→$38K). 100K+ certified, jobs disappeared.
❌ Basic Python: −61% ($82K→$32K). AI writes better code.
❌ Entry Data Analysis: −64% ($78K→$28K). Automated dashboards replaced analysts.

Commoditization happened in 18-36 months.

Here's the pattern:
Jan 17 4 tweets 3 min read
Steal my Atomic Habits prompt to turn any goal into a system that runs on autopilot.

Learning a skill. Building a business.

Getting fit. Writing a book.

This applies James Clear’s Atomic Habits framework to ANY challenge you’re facing 👇

Here’s the prompt you can steal:

———————————-
ATOMIC HABITS SYSTEM
———————————-


Atomic Habits proves that 1% daily improvement compounds to 37x growth in a year. People fail not from lack of motivation but lack of systems. Identity change drives behavior change. Environment beats willpower. This prompt transforms any goal into an Atomic Habits system using the Four Laws of Behavior Change: make it obvious, make it attractive, make it easy, make it satisfying.



You are an Atomic Habits strategist who turns any challenge into a system of small daily actions. You believe goals are for direction but systems are for progress. You never give vague advice. You break everything into specific atomic habits: tiny behaviors repeated until automatic. Your approach: ask what type of person achieves this goal, then reverse-engineer the atomic habits that person performs daily.



## Atomic Habits Core Principles
- Identity first: "Who is the type of person that achieves this?"
- Systems over goals: daily process matters more than outcomes
- Atomic improvements: smallest action that moves forward
- Environment design: make good behavior the default path

## The Four Laws Applied To Any Goal

LAW 1 - MAKE IT OBVIOUS:
- Create specific cues: "I will [ACTION] at [TIME] in [LOCATION]"
- Habit stack: "After [CURRENT HABIT], I will [NEW HABIT]"
- Design environment so triggers are visible

LAW 2 - MAKE IT ATTRACTIVE:
- Temptation bundle: pair hard tasks with enjoyable ones
- Connect actions to identity and purpose
- Join communities doing the same thing

LAW 3 - MAKE IT EASY:
- Two-Minute Rule: scale down to 2-minute starter version
- Remove all friction from desired behaviors
- Prime environment night before

LAW 4 - MAKE IT SATISFYING:
- Track progress visibly
- Reward yourself immediately after
- Never miss twice rule

## Your Process
1. Understand the goal and why it matters
2. Define identity: "I am someone who..."
3. Identify 3-5 atomic habits that drive this goal
4. Apply all four laws to each habit
5. Build morning/evening habit stacks
6. Create accountability protocol

## Output
- Identity statement
- 3-5 atomic habits with two-minute versions
- Habit stack sequence
- Environment design checklist
- Tracking method
- Failure protocol



[YOUR GOAL OR CHALLENGE]
Jan 16 14 tweets 5 min read
🚨 Google's official prompting guide is marketing. Their internal researchers use completely different techniques.

I analyzed 500+ research papers and found 10 prompting patterns DeepMind uses that aren't documented anywhere.

Pattern #4 increased my accuracy from 73% to 94%.

Here are the 10 internal techniques:

Pattern #1: Constitutional Prompting

Public docs say: "Be clear and specific"

DeepMind actually uses: Multi-layered constitutional principles that self-correct.

Example: "First verify this follows principle X. If violation detected, revise. Then check principle Y. Iterate until aligned."

Works because: Forces reasoning about constraints, not just tasks.
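The "verify → revise → iterate until aligned" loop above can be sketched as plain control flow. Here `check` and `revise` are stand-ins for LLM calls, and the banned-word demo is purely illustrative:

```python
def constitutional_pass(draft, principles, check, revise, max_rounds=3):
    """Check a draft against each principle in turn; revise on violation.

    `check(draft, principle)`  -> True if the draft satisfies the principle.
    `revise(draft, principle)` -> a revised draft.
    Both are stand-ins for model calls; this shows only the control flow.
    """
    for _ in range(max_rounds):
        violated = [p for p in principles if not check(draft, p)]
        if not violated:
            return draft  # aligned with every principle
        for p in violated:
            draft = revise(draft, p)
    return draft  # give up after max_rounds rather than loop forever

# Stub demo: a "principle" is a banned word; revision deletes it.
check = lambda d, word: word not in d
revise = lambda d, word: d.replace(word, "").strip()
out = constitutional_pass("definitely maybe fine", ["definitely", "maybe"], check, revise)
```

The `max_rounds` cap matters in practice: without it, a revision that introduces a new violation can cycle indefinitely.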
Jan 15 15 tweets 4 min read
R.I.P. basic prompting.

MIT just dropped a technique that makes ChatGPT reason like a team of experts instead of one overconfident intern.

It’s called “Recursive Meta-Cognition” and it outperforms standard prompts by 110%.

Here’s the prompt (and why this changes everything) 👇

The problem with how you prompt AI:

You ask one question. AI gives one answer. If it’s wrong, you never know.

It’s like asking a random person on the street for medical advice and just… trusting them.

No second opinion. No fact-checking. No confidence level.
Jan 12 13 tweets 4 min read
OpenAI and Google engineers leaked these automation patterns that separate amateurs from pros.

I've been using insider knowledge from actual AI architects for 8 months. The difference is insane.

Here are 8 patterns they don't want you to know (but I'm sharing anyway):

Pattern #1: Progressive Context Loading

Most people dump everything into the prompt upfront. Pros load context just-in-time.

Instead of "here's 50 files, analyze them," they use: retrieve → filter → inject only what's needed for the current step.

Result: 70% faster responses, zero context rot.
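The retrieve → filter → inject flow can be sketched in a few lines. The word-overlap scorer is a deliberately naive stand-in (real systems would use embeddings or BM25), and the `load_context` helper is hypothetical:

```python
def load_context(step, corpus, retrieve, budget=3):
    """Progressive context loading: retrieve -> filter -> inject.

    Instead of dumping the whole corpus into one prompt, pull only the
    documents relevant to the *current* step, capped at `budget` docs.
    """
    scored = [(retrieve(step, doc), doc) for doc in corpus]            # retrieve
    relevant = [d for score, d in sorted(scored, reverse=True) if score > 0]
    injected = relevant[:budget]                                       # filter
    return "Context:\n" + "\n".join(injected) + f"\n\nTask: {step}"    # inject

# Naive stand-in scorer: count words shared between the step and a document.
score = lambda step, doc: len(set(step.lower().split()) & set(doc.lower().split()))
corpus = ["billing API returns 402", "logo colors", "billing retries fail twice"]
prompt = load_context("debug billing retries", corpus, score)
```

Irrelevant documents ("logo colors" here) never enter the prompt at all, which is what keeps the context from rotting as the corpus grows.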
Jan 10 9 tweets 4 min read
Anthropic engineers just leaked their internal AI workflow.

Turns out, 99% of people are using LLMs completely wrong.

Here are 5 techniques that separate amateurs from experts:

(Comment "Claude" and I'll DM you my complete Claude Mastery Guide) Image 1/ THE "MEMORY INJECTION" TECHNIQUE

Most people start fresh every time. Anthropic engineers pre-load context that persists across conversations.

LLMs perform 3x better when they have "memory" of your workflow, style, and preferences.

Example prompt to test:

"You're my coding assistant. Remember these preferences: I use Python 3.11, prefer type hints, favor functional programming, and always include error handling. Acknowledge these preferences and use them in all future responses."Image
Jan 9 13 tweets 5 min read
R.I.P. basic RAG ☠️

Graph-enhanced retrieval is the new king.

OpenAI, Anthropic, and Microsoft engineers don't build RAG systems like everyone else.

They build knowledge graphs first.

Here are 7 ways to use graph RAG instead of vector search:

Graph RAG understands relationships.

It knows "Enterprise Customer" connects to "Contract Terms" which connects to "Refund Policy" which connects to "Finance Team Approvals."

It traverses the knowledge graph to build context, not just match keywords.

The difference is insane.
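The traversal idea can be sketched with a toy graph built from the exact relationships above. Breadth-first expansion with a hop limit stands in for whatever traversal a production graph RAG system uses:

```python
from collections import deque

# Toy knowledge graph using the relationships from the thread.
GRAPH = {
    "Enterprise Customer": ["Contract Terms"],
    "Contract Terms": ["Refund Policy"],
    "Refund Policy": ["Finance Team Approvals"],
    "Finance Team Approvals": [],
}

def graph_context(start, graph, max_hops=2):
    """Build context by traversing edges (BFS), not by matching keywords."""
    seen, frontier, order = {start}, deque([(start, 0)]), [start]
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # hop budget exhausted; don't expand further
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                order.append(nbr)
                frontier.append((nbr, hops + 1))
    return order

context = graph_context("Enterprise Customer", GRAPH)
```

A keyword match on "Enterprise Customer" would never surface "Refund Policy"; the traversal reaches it in two hops because the edges encode the relationship.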
Jan 7 19 tweets 5 min read
I collected every NotebookLM prompt that went viral on Reddit, X, and research communities.

These turned a "cool AI toy" into a research weapon that does 10 hours of work in 20 seconds.

16 copy-paste prompts. Zero fluff.

Steal them all 👇

1/ THE "5 ESSENTIAL QUESTIONS" PROMPT

Reddit called this a "game changer." It forces NotebookLM to extract pedagogically-sound structure instead of shallow summaries:

"Analyze all inputs and generate 5 essential questions that, when answered, capture the main points and core meaning of all inputs."
Jan 7 14 tweets 5 min read
🚨 New research just exposed the AI agent paradox.

Increasing agent autonomy by 30% increases failure rates by 240%.

Adding human verification loops? Failure drops 78%.

The math is brutal: autonomy costs more than oversight.

Here's everything you need to know:

The hype cycle sold us a fantasy.

Deploy AI agents. Watch them automate everything. Sit back while they handle sales, support, research, and coding.

Zero intervention. Pure autonomy. The AI employee dream.

Then production hit. And the dream became a nightmare.
Jan 6 12 tweets 4 min read
🚨 DeepMind discovered that neural networks can train for thousands of epochs without learning anything.

Then suddenly, in a single epoch, they generalize perfectly.

This phenomenon is called "Grokking".

It went from a weird training glitch to a core theory of how models actually learn.

Here’s what changed (and why this matters now):

Grokking was discovered by accident in 2022.

Researchers at OpenAI trained models on simple math tasks (modular addition, permutation groups). Standard training: Model overfits fast, generalizes poorly.

But when they kept training past "convergence" for 10,000+ epochs, models suddenly achieved perfect generalization.

Nobody expected this.
Jan 5 13 tweets 4 min read
R.I.P. few-shot prompting.

Meta AI researchers discovered a technique that makes LLMs 94% more accurate without any examples.

It's called "Chain-of-Verification" (CoVe) and it completely destroys everything we thought we knew about prompting.

Here's the breakthrough (and why this changes everything) 👇

Here's the problem with current prompting:

LLMs hallucinate. They generate confident answers that are completely wrong.

Few-shot examples help, but they're limited by:

- Your choice of examples
- Token budget constraints
- Still prone to hallucination

We've been treating symptoms, not the disease.
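CoVe's flow is: draft an answer, plan verification questions, answer each one independently so the draft's errors can't leak into the checks, then revise. A control-flow sketch with a deterministic stub in place of real model calls (the four prompt strings are illustrative, not the paper's exact wording):

```python
def chain_of_verification(question, ask):
    """Chain-of-Verification control flow: draft -> verify -> revise.

    `ask(prompt)` stands in for a single LLM call; the four prompts
    mirror the CoVe stages (baseline, plan, execute, final).
    """
    draft = ask(f"Answer: {question}")
    plan = ask(f"List fact-check questions for this answer:\n{draft}")
    # Answer each verification question in isolation, without the draft,
    # so a hallucinated draft can't bias its own fact-checks.
    checks = [ask(f"Answer independently: {q}") for q in plan.splitlines() if q]
    return ask(
        f"Question: {question}\nDraft: {draft}\nVerified facts:\n"
        + "\n".join(checks)
        + "\nWrite the final answer."
    )

# Deterministic stub so the four-stage flow can be traced end to end.
def stub(prompt):
    if prompt.startswith("Answer:"):
        return "Draft answer"
    if prompt.startswith("List"):
        return "Q1\nQ2"
    if prompt.startswith("Answer independently"):
        return "independent fact"
    return "Final verified answer"

result = chain_of_verification("Who wrote it?", stub)
```

The key design choice is that `checks` never see the draft: verification questions answered in isolation are what breaks the self-confirmation loop behind hallucinations.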
Jan 3 14 tweets 5 min read
🚨 A 1991 technique lets you build trillion-parameter models while only activating billions.

Nobody scaled it for decades.

Now Mixture of Experts (MoE) is the secret behind the fastest, cheapest open-source giants, and it's about to make dense LLMs outdated.

Here's how 30-year-old math became the future of AI:Image The core idea is brilliantly simple:

Instead of one giant model doing everything, you train hundreds of specialized "expert" models.

A router network decides which experts to activate for each input.

Most experts stay dormant. Only 2-8 activate per token.

Result: Trillion-parameter capacity at billion-parameter cost.
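The routing step can be sketched in a few lines: score every expert, keep the top-k, renormalize the weights. A toy router in pure Python (real MoE layers do this per token inside the network; the logits here are made up):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(expert_logits, k=2):
    """Top-k routing: activate only k experts for this token.

    Returns (expert_index, weight) pairs for the k chosen experts,
    with weights renormalized so they sum to 1. Experts not chosen
    are never run, which is where the compute savings come from.
    """
    probs = softmax(expert_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# 8 experts, only 2 fire for this token: capacity scales with the
# expert count, while compute scales only with k.
chosen = route([0.1, 2.0, -1.0, 0.5, 3.0, 0.0, 0.2, -0.5], k=2)
```

The layer's output is then the weighted sum of just the chosen experts' outputs, so a trillion parameters of capacity costs only a couple of experts' worth of FLOPs per token.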