Millie Marconi
Feb 13
This is really wild.

A 20-year-old interviewed 12 AI researchers from OpenAI, Anthropic, and Google.

They all use the same 10 prompts and you've probably never seen them.

Not the ones on X. Not the "mega prompts." Not what courses teach.

These are the prompts that actually ship frontier AI products.

Here are the prompts you can steal right now:
1. The "Show Your Work" Prompt

"Walk me through your reasoning step-by-step before giving the final answer."

This prompt forces the model to externalize its logic. Catches errors before they compound.
2. The "Adversarial Interrogation"

"Now argue against your previous answer. What are the 3 strongest counterarguments?"

Models are overconfident by default. This forces intellectual honesty.
3. The "Constraint Forcing" Prompt

"You have exactly 3 sentences and must cite 2 specific sources. No hedging language."

Vagueness is the enemy of useful output. Hard constraints = crisp results.
4. The "Format Lock"

"Respond in valid JSON with these exact keys: {analysis, confidence_score, methodology, limitations}."

Structured output = parseable output. You can't build systems on prose.
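
The payoff comes downstream: with the keys locked, you can validate the response in code instead of eyeballing it. A minimal sketch in Python (the raw_response string is just a stand-in for whatever your model call returned):

```python
import json

# Keys we locked in the Format Lock prompt
REQUIRED_KEYS = {"analysis", "confidence_score", "methodology", "limitations"}

def parse_locked_output(raw_response: str) -> dict:
    """Parse a Format Lock response and fail loudly if the contract is broken."""
    data = json.loads(raw_response)  # raises ValueError if the model drifted into prose
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Model omitted required keys: {missing}")
    return data

# raw_response is a placeholder for a real model response
raw_response = '{"analysis": "...", "confidence_score": 82, "methodology": "...", "limitations": "..."}'
print(parse_locked_output(raw_response)["confidence_score"])  # 82
```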
5. The "Expertise Assignment"

"You are a [specific role] with 15 years experience in [narrow domain]. You would never say [common mistake]. You always [specific habit]."

Generic AI = generic output. Specific persona = specific expertise.
6. The "Thinking Budget"

"Take 500 words to think through this problem before answering. Show all dead ends."

More tokens = better reasoning. Dead ends reveal model understanding.
7. The "Comparison Protocol"

"Compare approach A vs B across these 5 dimensions: [speed, accuracy, cost, complexity, maintenance]. Use a table."

Forces structured analysis. Tables > paragraphs for technical decisions.
8. The "Uncertainty Quantification"

"Rate your confidence 0-100 for each claim. Flag anything below 70 as speculative."

Hallucinations are less dangerous when labeled. Confidence scoring is mandatory.
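
If you also ask for the ratings as JSON, the flagging can happen in code. A rough sketch, assuming the model returned a list of claim/confidence pairs (the field names and sample claims are illustrative, not from the thread):

```python
import json

# Stand-in for a model response to: "Rate your confidence 0-100 for each claim,
# as a JSON list of objects with 'claim' and 'confidence' fields."
raw = '[{"claim": "The library supports streaming", "confidence": 91}, {"claim": "It was released in 2021", "confidence": 55}]'

for item in json.loads(raw):
    label = "SPECULATIVE" if item["confidence"] < 70 else "OK"
    print(f'[{label}] ({item["confidence"]}) {item["claim"]}')
```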
9. The "Edge Case Hunter"

"What are 5 inputs that would break this approach? Be adversarial."

Models miss edge cases humans would catch. Forcing adversarial thinking reveals brittleness.
10. The "Chain of Verification"

"First, answer the question. Second, list 3 ways your answer could be wrong. Third, verify each concern and update your answer."

Self-correction built into the prompt. Models fix their own mistakes.
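
You can also run the three steps as three turns of one conversation instead of one long prompt. A sketch using the OpenAI Python SDK (the model name is a placeholder; any chat model works):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whatever model you have access to

def ask(messages: list) -> str:
    """Send the running conversation and append the model's reply to it."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    reply = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = [{"role": "user", "content": "Answer the question: <your question here>"}]
answer = ask(messages)                                                   # step 1: answer

messages.append({"role": "user", "content": "List 3 ways your answer could be wrong."})
concerns = ask(messages)                                                 # step 2: self-critique

messages.append({"role": "user", "content": "Verify each concern and update your answer."})
print(ask(messages))                                                     # step 3: verified answer
```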
AI makes content creation faster than ever, but it also makes guessing riskier than ever.

If you want to know what your audience will react to before you post, TestFeed gives you instant feedback from AI personas that think like your real users.

It’s the missing step between ideas and impact. Join the waitlist and stop publishing blind.

testfeed.ai


More from @MillieMarconnni

Feb 12
I finally understand why my complex prompts sucked.

The solution isn't better prompting; it's "Prompt Chaining."

Break one complex prompt into 5 simple ones that feed into each other.

Tested for 30 days. Output quality jumped 67%.

Here's how: 👇
Most people write 500-word mega prompts and wonder why the AI hallucinates.

I did this for 2 years with ChatGPT.

Then I discovered how OpenAI engineers actually use these models.

They chain simple prompts. Each one builds on the last.
Here's the framework:

Step 1: Break your complex task into 5 micro-tasks
Step 2: Each prompt outputs a variable for the next
Step 3: Final prompt synthesizes everything

Example: Instead of "write a viral thread about AI" →

Chain 5 prompts that do ONE thing each.
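
Here's roughly what that chain looks like when scripted, assuming the OpenAI Python SDK (the five micro-task prompts below are invented for illustration, not the exact ones from the thread):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def run(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

# Each call does ONE thing; its output becomes a variable for the next call.
angle   = run("List 5 contrarian angles for a short thread about AI, then pick the strongest one.")
outline = run(f"Outline a 6-tweet thread built around this angle:\n{angle}")
hooks   = run(f"Write 3 candidate opening hooks for this outline:\n{outline}")
draft   = run(f"Write the full thread from this outline:\n{outline}\nUse the best of these hooks:\n{hooks}")
final   = run(f"Edit this thread for brevity and punch, keeping every tweet under 280 characters:\n{draft}")
print(final)
```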
Feb 10
OpenAI engineers don't prompt like everyone else.

They don't use "act as an expert."
They don't use chain-of-thought.
They don't use mega prompts.
They use "Prompt Contracts."

A former engineer just exposed the full technique.

Here's how to use it on any model: 👇
Here's why your prompts suck:

You: "Write a professional email"
AI: *writes generic corporate bullshit*

You: "Be more creative"
AI: *adds exclamation marks*

You're giving vibes, not instructions.

The AI is guessing what you want. Guessing = garbage output.
Prompt Contracts change everything.

Instead of "write X," you define 4 things:

1. Goal (exact success metric)
2. Constraints (hard boundaries)
3. Output format (specific structure)
4. Failure conditions (what breaks it)

Think legal contract, not creative brief.
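
One way to keep the contract honest is to hard-code the four sections as a template and only fill in the blanks. A sketch (the example values are made up, not the engineer's actual contract):

```python
# A Prompt Contract as a reusable template: fill the four sections, never freestyle.
CONTRACT = """\
GOAL: {goal}
CONSTRAINTS: {constraints}
OUTPUT FORMAT: {output_format}
FAILURE CONDITIONS: {failure_conditions}
"""

prompt = CONTRACT.format(
    goal="A 120-word email asking the client to approve the Q3 budget by Friday.",
    constraints="No exclamation marks. No apologies. One call to action only.",
    output_format="Subject line, then 3 short paragraphs, then a one-sentence CTA.",
    failure_conditions="The draft fails if it exceeds 140 words or buries the deadline.",
)
print(prompt)  # paste into any chat interface, or send via API
```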
Feb 9
Stop using "act as a marketing expert."

Start using "act as a marketing expert + data analyst + psychologist."

The difference is absolutely insane.

It's called "persona stacking" and here are 7 combinations worth stealing:
1/ Content Creation

Personas: Copywriter + Behavioral Psychologist + Data Analyst

Prompt:

"Act as a copywriter who understands behavioral psychology and data-driven content strategy. Write a LinkedIn post about [topic] that triggers curiosity, uses pattern interrupts, and optimizes for engagement metrics."

Result: Content that hooks AND converts.
2/ Product Strategy

Personas: Product Manager + UX Designer + Economist

Prompt:

"Act as a product manager with UX design expertise and economic modeling skills. Analyze this feature request considering user experience, development costs, and market positioning. What's the ROI?"

Result: Decisions backed by multiple frameworks.
Feb 5
Most people use Perplexity like a fancy Google search.

That's insane.

It's actually a full-blown research assistant that can compress 10 hours of analysis into 20 seconds if you feed it the right prompts.

Here's what actually works:
1. Competitive Intelligence Dashboard

Prompt I use:

"
Create a competitive analysis for [COMPANY/PRODUCT] covering:

1. Recent product launches (last 90 days)
2. Pricing changes (with before/after if available)
3. Customer sentiment (Reddit, Twitter, G2 reviews - categorize positive/negative themes)
4. Technical stack (from job postings and tech blogs)
5. Funding/financial news (any recent rounds, partnerships, layoffs)

Format as a table:
| Category | Key Findings | Source Date | Impact Assessment |

Focus on information from the last 30 days. Cite every claim.
"
2. Technical Comparison Matrix

Prompt:

"
Compare [TOOL A] vs [TOOL B] vs [TOOL C] for [SPECIFIC USE CASE]:

Build a decision matrix:
| Feature | Tool A | Tool B | Tool C | Winner & Why |

Must include:
- Pricing (exact tiers, hidden costs)
- Performance benchmarks (from independent tests)
- Integration options (with [MY STACK])
- Community size (GitHub stars, Discord members, Stack Overflow activity)
- Recent updates (last 3 months)
- Known issues (from issue trackers, Reddit)

Rank overall winner with confidence score (1-10) and reasoning.

Cite every benchmark and review.
"
Feb 3
Plot twist: The best prompts are negative.

After using ChatGPT, Claude, and Gemini professionally for 2 years, I realized telling AI what NOT to do works better than telling it what to do.

Here are 8 "anti-prompts" that changed everything:
1/ DON'T use filler words

Instead of: "Write engaging content"

Use: "No fluff. No 'delve into'. No 'landscape'. No 'it's important to note'. Get straight to the point."

Result: 67% shorter outputs with 2x more substance.

The AI stops padding and starts delivering.
2/ DON'T explain the obvious

Add this line: "Skip introductions. Skip conclusions. Skip context I already know."

Example: When asking for code, I get the function immediately.

No "Here's a Python script that..." preamble.

Saves 40% of my reading time.
Jan 31
OpenAI and Anthropic engineers leaked the secret to consistent AI outputs.

I've been using this insider knowledge for 6 months. The difference is insane.

Here's what they don't want you to know (bookmark this).
Step 1: Control the Temperature

Most AI interfaces hide this, but you need to set temperature to 0 or 0.1 for consistency.

Via API:

ChatGPT: temperature: 0
Claude: temperature: 0
Gemini: temperature: 0

Via chat interfaces:

ChatGPT Plus: Can't adjust (stuck at ~0.7)
Claude Projects: Uses default (~0.7)
Gemini Advanced: Can't adjust

This is why API users get better consistency. They control what you can't see.

If you're stuck with web interfaces, use the techniques below to force consistency anyway.
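
For the API route, the setting is one parameter on the request. A minimal sketch with the OpenAI and Anthropic Python SDKs (model names are placeholders; Gemini's SDK exposes an equivalent generation config):

```python
from openai import OpenAI
from anthropic import Anthropic

prompt = "Summarize this paragraph in exactly 2 sentences: ..."

# OpenAI: temperature is a top-level request parameter
gpt = OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model name
    temperature=0,
    messages=[{"role": "user", "content": prompt}],
)
print(gpt.choices[0].message.content)

# Anthropic: same idea; max_tokens is required here
claude = Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=512,
    temperature=0,
    messages=[{"role": "user", "content": prompt}],
)
print(claude.content[0].text)
```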
Step 2: Build a System Prompt Template

Stop rewriting your prompt every time.

Create a master template with fixed structure:

ROLE: [Exactly who the AI is]
TASK: [Exactly what to do]
FORMAT: [Exactly how to structure output]
CONSTRAINTS: [Exactly what to avoid]
EXAMPLES: [Exactly what good looks like]

Example for blog writing:

ROLE: You are a direct, no-fluff content writer
TASK: Write a 500-word blog intro on [topic]
FORMAT: Hook → Problem → Solution → CTA. 3 paragraphs max.
CONSTRAINTS: No corporate speak. No "in today's world". No metaphors.
EXAMPLES: [paste your best previous output here]

Reuse this template. Change only the [topic]. Consistency skyrockets.
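
If you're on the API, the same template drops into the system message so only the topic changes per call. A sketch, again assuming the OpenAI Python SDK (the template text mirrors the blog example above):

```python
from openai import OpenAI

client = OpenAI()

SYSTEM_TEMPLATE = """\
ROLE: You are a direct, no-fluff content writer
TASK: Write a 500-word blog intro on {topic}
FORMAT: Hook → Problem → Solution → CTA. 3 paragraphs max.
CONSTRAINTS: No corporate speak. No "in today's world". No metaphors.
EXAMPLES:
{examples}
"""

def blog_intro(topic: str, examples: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,   # pairs with Step 1 for consistency
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE.format(topic=topic, examples=examples)},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return resp.choices[0].message.content

print(blog_intro("prompt chaining", "<paste your best previous intro here>"))
```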
