Millie Marconi
Oct 9, 2025 · 8 tweets · 3 min read
Holy shit... Stanford just built a system that converts research papers into working AI agents.

It’s called Paper2Agent, and it literally:

• Recreates the method in the paper
• Applies it to your own dataset
• Answers questions like the author

This changes how we do science forever.

Let me explain ↓
The problem is obvious to anyone who’s ever read a “methods” paper:

You find the code. It breaks.
You try the tutorial. Missing dependencies.
You email the authors. Silence.

Science moves fast, but reproducibility is a joke.

Paper2Agent fixes that. It automates the whole paper-to-runnable-agent conversion.
Here’s how it works (and this part is wild):

It reads the paper, grabs the GitHub repo, builds the environment, figures out the methods, then wraps everything as an MCP server.

That’s a protocol any LLM (Claude, GPT, Gemini) can talk to.
So you just ask:

“Run the Scanpy pipeline on my data.h5ad”

and it actually runs it.
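The core idea, "a paper's method becomes a tool an LLM can call," can be sketched in plain Python. Paper2Agent itself exposes tools over MCP; here a simple registry stands in for that protocol, and the tool name (`scanpy_cluster`), its arguments, and the returned fields are illustrative assumptions, not the real Scanpy pipeline.

```python
# Minimal sketch of "method becomes a callable tool".
# A dict-based registry stands in for an MCP server; everything
# here is illustrative, not Paper2Agent's actual implementation.

TOOLS = {}

def tool(name, description):
    """Register a function as a named tool an agent could invoke."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("scanpy_cluster", "Cluster cells in a single-cell dataset")
def scanpy_cluster(path):
    # A real server would load `path` with scanpy and run the
    # paper's pipeline; here we just return a structured result.
    return {"input": path, "clusters": 8, "method": "leiden"}

def handle_request(name, **kwargs):
    """What the LLM-facing layer does: look up the tool, run it."""
    return TOOLS[name]["fn"](**kwargs)

result = handle_request("scanpy_cluster", path="data.h5ad")
print(result["clusters"])  # 8
```

The protocol detail (JSON-RPC framing, schemas) lives in MCP itself; the point is that once the method is registered like this, "Run the Scanpy pipeline on my data.h5ad" becomes a tool call instead of a manual setup ordeal.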
They tested it on three big biology papers:

• AlphaGenome - predicts genetic variant effects
• TISSUE - uncertainty-aware spatial transcriptomics
• Scanpy - single-cell clustering

All converted automatically.
All reproduced results exactly.

Zero human setup.
And this is where it gets interesting.

The AlphaGenome agent disagreed with the original authors.

When asked to re-analyze a variant linked to cholesterol, it picked a different causal gene (SORT1) and defended it with plots, quantile scores, and biological reasoning.

An AI agent just reinterpreted a Nature paper.
Think about what that means.

Every paper becomes a living system.
You don’t just read it - you talk to it.
You test it, challenge it, extend it.

And if your paper can’t be turned into an agent?
Maybe it wasn’t reproducible to begin with.
PDFs are static.
Agents are alive.

Paper2Agent hints at a future where discoveries are interactive.

Where AlphaFold could talk to Scanpy.
Where methods become APIs.

Honestly, this might be what “AI co-scientists” actually look like.
Stop guessing what your customers want.

TestFeed gives you AI personas of your target customers + expert consultants that:

- See your screen while you work
- Give contextual feedback in real-time
- Think like the actual people you're building for

Try it free: testfeed.ai


More from @MillieMarconnni

Feb 16
If you’re a PM and not using Claude like this, you’re already behind.

I broke down how top product managers at Google, Meta, and Anthropic actually integrate it into roadmap planning, PRDs, and stakeholder alignment.

It’s not about writing better docs.

It’s about making better decisions.

Here are 10 prompts they use daily:
1. PRD Generation from Customer Calls

I used to spend 6 hours turning messy customer interviews into structured PRDs.

Now I just dump the transcript into Claude with this:

Prompt:

---

You are a senior PM at [COMPANY]. Analyze this customer interview transcript and create a PRD with:

1. Problem statement (what pain points did the customer express in their own words?)
2. User stories (3-5 stories in "As a [user], I want [goal] so that [benefit]" format)
3. Success metrics (what would make this customer renew/upgrade?)
4. Edge cases the customer implied but didn't directly state

Be ruthlessly specific. Quote the customer directly when identifying problems.

---
2. Competitive Analysis with Actual Strategy

Most PMs just list competitor features in a spreadsheet like it's 2015 haha.

Here's how I get Claude to actually think like a competitive analyst:

Prompt:

---

You are a competitive intelligence analyst

Analyze [COMPETITOR] and answer:
- What job are customers hiring them to do? (not what features they have)
- Where are they vulnerable? (what complaints appear in G2/Reddit/Twitter?)
- What would you build to win their customers in the next 6 months?

Constraints:

- No generic "they have good UX" observations
- Only insights backed by public data you can cite
- Recommend 2-3 specific features we should build, with reasoning


---
Feb 14
After 2 years of writing with Claude, I can say it's the tool that revolutionized my content creation more than Grammarly, Hemingway, and every writing course combined.

Here are 10 prompts that transformed my writing and could do the same for you:
1. The 5-Minute First Draft

Prompt:

"Turn these rough notes into an article:

[paste your brain dump]

Target length: [800/1500/3000] words
Audience: [describe reader]
Goal: [inform/persuade/teach]

Keep my ideas and examples. Fix structure and flow."
2. Headline Machine (Steal This)

Prompt:

"Topic: [your topic]

Write 20 headlines using these formulas:
- How to [benefit] without [pain point]
- [Number] ways [audience] can [outcome]
- The [adjective] guide to [topic]
- Why [common belief] is wrong about [topic]
- [Do something] like [authority figure]
- I [did thing] and here's what happened
- What [success case] knows about [topic] that you don't

Rank top 3 by click-through potential."
Feb 13
This is really wild.

A 20 year old interviewed 12 AI researchers from OpenAI, Anthropic, and Google.

They all use the same 10 prompts and you've probably never seen them.

Not the ones on X. Not the "mega prompts." Not what courses teach.

These are the prompts that actually ship frontier AI products.

Here are the prompts you can steal right now:
1. The "Show Your Work" Prompt

"Walk me through your reasoning step-by-step before giving the final answer."

This prompt forces the model to externalize its logic. Catches errors before they compound.
2. The "Adversarial Interrogation"

"Now argue against your previous answer. What are the 3 strongest counterarguments?"

Models are overconfident by default. This forces intellectual honesty.
Feb 12
I finally understand why my complex prompts sucked.

The solution isn't better prompting. It's "Prompt Chaining."

Break one complex prompt into 5 simple ones that feed into each other.

Tested for 30 days. Output quality jumped 67%.

Here's how: 👇
Most people write 500-word mega prompts and wonder why the AI hallucinates.

I did this for 2 years with ChatGPT.

Then I discovered how OpenAI engineers actually use these models.

They chain simple prompts. Each one builds on the last.
Here's the framework:

Step 1: Break your complex task into 5 micro-tasks
Step 2: Each prompt outputs a variable for the next
Step 3: Final prompt synthesizes everything

Example: Instead of "write a viral thread about AI" →

Chain 5 prompts that do ONE thing each.
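The three steps above can be sketched in a few lines. `call_llm` is a stub standing in for whatever model API you use; the micro-prompts are made-up examples. The chaining loop, where each output becomes the next prompt's input, is the actual point.

```python
# Prompt chaining sketch: each micro-prompt feeds the next.
# `call_llm` is a stub; swap in a real API client in practice.

def call_llm(prompt):
    # Stub: a real implementation would call an LLM API here.
    return f"[model output for: {prompt[:40]}]"

def chain(task, steps):
    """Run each micro-prompt, feeding the previous output forward."""
    context = task
    outputs = []
    for step in steps:
        prompt = f"{step}\n\nInput:\n{context}"
        context = call_llm(prompt)   # output becomes next input
        outputs.append(context)
    return outputs

steps = [
    "List 5 angles on this topic.",
    "Pick the strongest angle and outline a thread.",
    "Draft the hook tweet.",
    "Draft the body tweets.",
    "Tighten everything into a final thread.",
]
results = chain("viral thread about AI", steps)
print(len(results))  # one output per micro-task
```

Each call stays small and checkable, which is why this tends to hallucinate less than one 500-word mega prompt: you can inspect (or validate) the intermediate outputs before they compound.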
Feb 10
OpenAI engineers don't prompt like everyone else.

They don't use "act as an expert."
They don't use chain-of-thought.
They don't use mega prompts.
They use "Prompt Contracts."

A former engineer just exposed the full technique.

Here's how to use it on any model: 👇
Here's why your prompts suck:

You: "Write a professional email"
AI: *writes generic corporate bullshit*

You: "Be more creative"
AI: *adds exclamation marks*

You're giving vibes, not instructions.

The AI is guessing what you want. Guessing = garbage output.
Prompt Contracts change everything.

Instead of "write X," you define 4 things:

1. Goal (exact success metric)
2. Constraints (hard boundaries)
3. Output format (specific structure)
4. Failure conditions (what breaks it)

Think legal contract, not creative brief.
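One way to make the four parts concrete is to treat the contract as data: build the prompt from it, then check the output against the hard boundaries mechanically. The contract fields come from the thread; the example task and the validation logic are illustrative assumptions.

```python
# A "prompt contract" as data plus a checker. The four fields
# mirror the thread; the specific checks are illustrative.

contract = {
    "goal": "Email a client to reschedule Thursday's call",
    "constraints": ["under 120 words", "no exclamation marks"],
    "output_format": "subject line, then body",
    "failure_conditions": ["missing a proposed new time"],
}

def build_prompt(c):
    """Render the contract as explicit instructions, not vibes."""
    return (
        f"Goal: {c['goal']}\n"
        f"Constraints: {'; '.join(c['constraints'])}\n"
        f"Output format: {c['output_format']}\n"
        f"Reject your draft if: {'; '.join(c['failure_conditions'])}"
    )

def violations(c, draft):
    """Check the hard boundaries in code instead of eyeballing.
    These checks hand-encode this contract's two constraints."""
    found = []
    if len(draft.split()) > 120:
        found.append("over 120 words")
    if "!" in draft:
        found.append("contains exclamation marks")
    return found

prompt = build_prompt(contract)
print(violations(contract, "Hi! Can we move our call?"))
```

The payoff is the failure loop: if `violations` is non-empty, you re-prompt with the specific breach instead of saying "be more creative."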
Feb 9
Stop using "act as a marketing expert."

Start using "act as a marketing expert + data analyst + psychologist."

The difference is absolutely insane.

It's called "persona stacking" and here are 7 combinations worth stealing:
1/ Content Creation

Personas: Copywriter + Behavioral Psychologist + Data Analyst

Prompt:

"Act as a copywriter who understands behavioral psychology and data-driven content strategy. Write a LinkedIn post about [topic] that triggers curiosity, uses pattern interrupts, and optimizes for engagement metrics."

Result: Content that hooks AND converts.
2/ Product Strategy

Personas: Product Manager + UX Designer + Economist

Prompt:

"Act as a product manager with UX design expertise and economic modeling skills. Analyze this feature request considering user experience, development costs, and market positioning. What's the ROI?"

Result: Decisions backed by multiple frameworks.
