Millie Marconi
Founder backed by VC, building AI-driven tech without a technical background. In the chaos of a startup pivot: learning, evolving, and embracing change.

Feb 13, 12 tweets

This is really wild.

A 20-year-old interviewed 12 AI researchers from OpenAI, Anthropic, and Google.

They all use the same 10 prompts and you've probably never seen them.

Not the ones on X. Not the "mega prompts." Not what courses teach.

These are the prompts that actually ship frontier AI products.

Here are the prompts you can steal right now:

1. The "Show Your Work" Prompt

"Walk me through your reasoning step-by-step before giving the final answer."

This prompt forces the model to externalize its logic. Catches errors before they compound.
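
If you're calling a model from code instead of a chat window, the pattern is just a prefix on the user message. A minimal sketch, assuming the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY in the environment; the model name and question are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Should we cache embeddings or recompute them nightly?"

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works
    messages=[{
        "role": "user",
        "content": "Walk me through your reasoning step-by-step "
                   f"before giving the final answer.\n\n{question}",
    }],
)
print(resp.choices[0].message.content)
```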

2. The "Adversarial Interrogation"

"Now argue against your previous answer. What are the 3 strongest counterarguments?"

Models are overconfident by default. This forces intellectual honesty.
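
The mechanical detail that matters: the follow-up only works if the first answer stays in the message history, so the model is arguing against itself. A sketch, same SDK assumptions as above:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

messages = [{"role": "user", "content":
             "Should we fine-tune or use RAG for our support bot?"}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content

# Keep the first answer in history so the model argues against *itself*.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Now argue against your previous answer. "
                                "What are the 3 strongest counterarguments?"},
]
rebuttal = client.chat.completions.create(model=MODEL, messages=messages)
print(rebuttal.choices[0].message.content)
```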

3. The "Constraint Forcing" Prompt

"You have exactly 3 sentences and must cite 2 specific sources. No hedging language."

Vagueness is the enemy of useful output. Hard constraints = crisp results.

4. The "Format Lock"

"Respond in valid JSON with these exact keys: {analysis, confidence_score, methodology, limitations}."

Structured output = parseable output. You can't build systems on prose.
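
This is the one prompt that becomes real plumbing. A sketch that pairs it with the SDK's JSON mode plus a key check; the question is illustrative, and you'd add retry logic where the comment says so:

```python
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    response_format={"type": "json_object"},  # guarantees syntactically valid JSON
    messages=[{
        "role": "user",
        "content": "Respond in valid JSON with these exact keys: "
                   "analysis, confidence_score, methodology, limitations.\n\n"
                   "Question: What are the risks of shipping this migration on a Friday?",
    }],
)

data = json.loads(resp.choices[0].message.content)
missing = {"analysis", "confidence_score", "methodology", "limitations"} - data.keys()
if missing:
    raise ValueError(f"Model dropped keys: {missing}")  # retry or repair here
print(data["analysis"])
```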

5. The "Expertise Assignment"

"You are a [specific role] with 15 years experience in [narrow domain]. You would never say [common mistake]. You always [specific habit]."

Generic AI = generic output. Specific persona = specific expertise.
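
In code, the persona belongs in the system message so it survives every turn of the conversation. A sketch with a filled-in persona (the role and habits here are made up; swap in your own):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical persona; replace the role, domain, and habits with your own.
persona = (
    "You are a staff database engineer with 15 years of experience in "
    "PostgreSQL performance tuning. You would never say 'just add an index' "
    "without seeing the query plan. You always ask for EXPLAIN ANALYZE output."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system", "content": persona},  # persists across turns
        {"role": "user", "content": "Our orders query got slow last week. What now?"},
    ],
)
print(resp.choices[0].message.content)
```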

6. The "Thinking Budget"

"Take 500 words to think through this problem before answering. Show all dead ends."

More thinking time usually means better reasoning. Dead ends show you what the model actually understands.

7. The "Comparison Protocol"

"Compare approach A vs B across these 5 dimensions: [speed, accuracy, cost, complexity, maintenance]. Use a table."

Forces structured analysis. Tables > paragraphs for technical decisions.

8. The "Uncertainty Quantification"

"Rate your confidence 0-100 for each claim. Flag anything below 70 as speculative."

Hallucinations are less dangerous when labeled. Confidence scoring is mandatory.
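
To act on those scores in code, ask for them in a structured shape and filter. A sketch; the claims/confidence schema is my own convention, not something from the thread:

```python
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": "Rate your confidence 0-100 for each claim. Respond as JSON: "
                   '{"claims": [{"text": "...", "confidence": 0}]}.\n\n'
                   "Question: How do transformers handle long-range dependencies?",
    }],
)

for claim in json.loads(resp.choices[0].message.content)["claims"]:
    label = "SPECULATIVE" if claim["confidence"] < 70 else "ok"
    print(f"[{label}] ({claim['confidence']}) {claim['text']}")
```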

9. The "Edge Case Hunter"

"What are 5 inputs that would break this approach? Be adversarial."

Models miss edge cases humans would catch. Forcing adversarial thinking reveals brittleness.
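
You can close the loop by feeding the generated inputs straight into the thing under test. A sketch against a toy price parser (the function and prompt framing are mine):

```python
import json
from openai import OpenAI

def parse_price(text: str) -> float:
    """Toy function under test: a naive price parser."""
    return float(text.strip().lstrip("$"))

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": "What are 5 inputs that would break this approach? Be adversarial. "
                   'Respond as JSON: {"inputs": ["..."]}.\n\n'
                   "Approach: parse a price string by stripping '$' and calling float().",
    }],
)

for bad in json.loads(resp.choices[0].message.content)["inputs"]:
    try:
        parse_price(bad)
        print(f"survived: {bad!r}")
    except (ValueError, AttributeError, TypeError) as err:
        print(f"BROKE on {bad!r}: {err}")
```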

10. The "Chain of Verification"

"First, answer the question. Second, list 3 ways your answer could be wrong. Third, verify each concern and update your answer."

Self-correction built into the prompt. Models fix their own mistakes.
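
Wired up, this is just three turns in one conversation, with each answer appended to the history. A sketch, same SDK assumptions as the earlier examples:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

def turn(history: list, prompt: str) -> str:
    """Send a user prompt, record both sides in the history, return the reply."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list = []
turn(history, "First, answer the question: when did HTTP/3 become an RFC?")
turn(history, "Second, list 3 ways your answer could be wrong.")
print(turn(history, "Third, verify each concern and update your answer."))
```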

AI makes content creation faster than ever, but it also makes guessing riskier than ever.

If you want to know what your audience will react to before you post, TestFeed gives you instant feedback from AI personas that think like your real users.

It’s the missing step between ideas and impact. Join the waitlist and stop publishing blind.

testfeed.ai
