Miles Wang
Dec 16 · 7 tweets · 3 min read
If AI could interact and learn from the physical world, could it make more scientific advances?

We had GPT-5 optimize molecular cloning protocols in the wet lab. It achieved a 79x cloning efficiency gain and introduced a new enzyme-based approach.
Cloning protocols are important for protein engineering, organism engineering, and genetic screens. They are also an exciting testbed for AI-accelerated science: feedback loops are short (~1-2 days) and there is a clear metric, colony counts.
We partnered with Red Queen Bio to introduce an evolutionary framework where GPT-5 proposes a batch of changes to the Gibson Assembly protocol, gets the results of each change, and proposes anew. It did surprisingly well.
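The batch propose-and-evaluate loop described above can be sketched roughly as follows. This is a toy illustration only: the function names, the dict-of-parameters protocol representation, and the scoring stub are all hypothetical stand-ins (the real loop queries GPT-5 for proposals and scores each variant by wet-lab colony counts).

```python
import random

def evaluate(protocol):
    """Stand-in for a wet-lab run. In the real framework the score is a
    colony count from running the protocol variant; here it is a toy sum."""
    return sum(protocol.values())

def propose_protocol_changes(protocol, history, n):
    """Stand-in for GPT-5 proposing a batch of protocol tweaks
    (e.g. incubation times, enzyme amounts) given all prior results."""
    variants = []
    for _ in range(n):
        v = dict(protocol)
        key = random.choice(list(v))            # tweak one parameter at random
        v[key] = max(0, v[key] + random.choice([-1, 1]))
        variants.append(v)
    return variants

def optimize_protocol(base, rounds=5, batch_size=8):
    """Evolutionary loop: propose a batch, evaluate each variant,
    feed results back, and keep the best protocol seen so far."""
    history = []                                 # (variant, score) pairs
    best = (base, evaluate(base))
    for _ in range(rounds):
        variants = propose_protocol_changes(best[0], history, n=batch_size)
        results = [(v, evaluate(v)) for v in variants]
        history.extend(results)
        top = max(results, key=lambda r: r[1])
        if top[1] > best[1]:                     # greedy elitism: never regress
            best = top
    return best

best_protocol, best_score = optimize_protocol(
    {"incubation_min": 15, "enzyme_units": 5}, rounds=10)
```

The key design point is that each round's proposals are conditioned on the full history of results, so the model can exploit feedback rather than search blindly.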
While humans acted as GPT-5’s hands for carrying out the protocols, we also piloted an autonomous robot. It was built to execute arbitrary Gibson cloning protocols from natural language, with human supervision for safety.
Notably, GPT-5 proposed a new enzymatic procedure that adds two proteins: RecA and gp32. While they have been studied together biochemically, to our knowledge, this is the first time they've been functionally co-used in a cloning method.
To be clear: this is not a bio breakthrough. But it is a novel optimization, and perhaps at the level of a competent PhD student for this task. And it was surprising to us: when we first set out, we thought a 10x gain would be an impressive achievement.
Read our early write-up here: openai.com/index/accelera…

We hope to continue accelerating scientific advances.

• • •


