Emmanuel Ameisen
Mar 4, 2024 · 8 tweets · 3 min read
Claude 3 Opus is great at following multiple complex instructions.

To test it, @ErikSchluntz and I had it take on @karpathy's challenge to transform his 2h13m tokenizer video into a blog post, in ONE prompt, and it just... did it

Here are some details:
First, we grabbed the raw transcript of the video and screenshots taken at 5s intervals.

Then, we chunked the transcript into 24 parts for efficient processing (the whole transcript fits within the context window, so this is merely a speed optimization).
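The chunking step can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors' actual code; the `chunk_transcript` helper and line-based splitting are assumptions.

```python
# Hypothetical sketch of the chunking step: split the transcript into
# contiguous parts so each can be processed in parallel. Splitting by
# line is an assumption; a real pipeline might split on sentence or
# section boundaries instead.

def chunk_transcript(lines, n_chunks=24):
    """Split a list of transcript lines into up to n_chunks contiguous parts."""
    if n_chunks <= 0:
        raise ValueError("n_chunks must be positive")
    size = max(1, -(-len(lines) // n_chunks))  # ceiling division
    return [lines[i:i + size] for i in range(0, len(lines), size)]

transcript = [f"line {i}" for i in range(100)]
chunks = chunk_transcript(transcript)
```

Since the whole transcript fits in the context window, this only buys wall-clock time: the chunks can be sent to the model concurrently and the outputs concatenated.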
We gave Opus the transcript, video screenshots, as well as two *additional* screenshots:
- One of Andrej's blog to display a visual style to follow
- The top of the notebook @karpathy shared, as a writing-style example

On top of that, we added lots of instructions (prompt in repo).
Here is a subset of what we asked the model, in one prompt (full prompt attached):
- directly write HTML
- filter out irrelevant screenshots
- transcribe the code examples in images if they contain a complete example
- synthesize transcript and image contents into prose
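A minimal sketch of how such a multimodal request might be assembled. The content-block shape follows the Anthropic Messages API, but the `build_request` helper and the instruction text are illustrative assumptions, not the actual prompt from the repo.

```python
import base64

# Illustrative instruction text, NOT the authors' actual prompt.
INSTRUCTIONS = (
    "Write the post directly as HTML. Filter out irrelevant screenshots. "
    "Transcribe code from images only when a complete example is shown. "
    "Synthesize the transcript and images into prose."
)

def build_request(transcript_chunk, screenshots_png):
    """Return a messages payload combining instructions, images, and text."""
    content = [{"type": "text", "text": INSTRUCTIONS}]
    for png_bytes in screenshots_png:
        content.append({
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(png_bytes).decode("ascii"),
            },
        })
    content.append({"type": "text", "text": transcript_chunk})
    return [{"role": "user", "content": content}]

messages = build_request("we tokenize the text as follows...", [b"\x89PNG..."])
```

Each parallel chunk would get its own request like this, with the shared instructions and style screenshots repeated.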
@ErikSchluntz and I have read the resulting post, and Opus manages to incorporate all of these requests and produces a great blog post.

The blog post is formatted as asked, with a subset of images selected and captioned.
It writes code examples, and relates the content of the transcript to the screenshots to provide a coherent narrative.

Overall, the tutorial is readable, clear, and much better than anything I've previously gotten out of an LLM.
Of course, the model isn't perfect yet!

When looking through the transcript, @ErikSchluntz found some issues and inconsistencies.

Some minor code bugs slipped through, and some of the sections are repetitive (this is partially due to parallel processing).
This was done in one prompt that @zswitten @ErikSchluntz and I wrote.

If you'd like to try to improve it, here is the prompt: github.com/hundredblocks/…

And the full blog post: hundredblocks.github.io/transcription_…


More from @mlpowered

Feb 5
We just shipped Claude Opus 4.6!

I’m also excited to share that for the first time, we used circuit tracing as part of the model's safety audit!

We studied why the model sometimes misrepresents the results of tool calls.
Features for deception were active over the transcript. Was the model intentionally being deceptive?

The circuit offers a simpler explanation: While calling the tool, the model precomputes the correct answer “in its head”.

Then, it attends to that rather than the tool output.
This suggests tension between two sources of results rather than deception.

Here’s the twist: deception features start activating *after* the model outputs the corrected answer. The model recognizes that the statement is incorrect and represents its own behavior as misleading!
Oct 21, 2025
How does an LLM compare two numbers? We studied this in a common counting task, and were surprised to learn that the algorithm it used was:

Put each number on a helix, and then twist one helix to compare it to the other.

Not your first guess? Not ours either. 🧵
The task we study is knowing when to break the line in fixed-width text.

We chose it for two reasons:
- While unconscious for humans (you just see when you're out of room), models don't have eyes - they only see tokens
- It is so common that models like Claude are very good at it
When we trace the computation, we find the model tracking two things: where it is in the current line, and how long the previous line was. Then it compares them to decide if the next word fits.

But how does it keep track of its position?
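The task itself is easy to state in code; what is remarkable is that the model learned an equivalent comparison purely over tokens. A plain-Python version of the behavior being studied (a greedy sketch of the task, not the model's algorithm):

```python
# Fixed-width line breaking: the model must implicitly compare how many
# characters the current line has used against the line width, using only
# the token stream.

def break_lines(words, width):
    """Greedy line breaking: start a new line when the next word won't fit."""
    lines, current = [], ""
    for word in words:
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

wrapped = break_lines("the quick brown fox jumps".split(), width=10)
```

Every decision in the loop is a comparison of two numbers, which is exactly the step the helix mechanism implements.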
Jul 31, 2025
Earlier this year, we showed a method to interpret the intermediate steps a model takes to produce an answer.

But we were missing a key bit of information: explaining why the model attends to specific concepts.

Today, we do just that 🧵
A key component of transformers is attention, which directs the flow of information from one token to another, and connects features.

In this work, we explain attention patterns by decomposing them into a list of feature/feature interactions.

We find neat things, for example:
Discordance heads!

How does the model decide if something is true?

If you take a simple example like the one below, where you ask if a banana is yellow or red, some interesting features show up.
May 29, 2025
The methods we used to trace the thoughts of Claude are now open to the public!

Today, we are releasing a library that lets anyone generate graphs showing the internal reasoning steps a model used to arrive at an answer.
The initial release lets you generate graphs for small open-weights models. You can just type a prompt and see an explanation of the key steps involved in generating the next token!

Try it on Gemma-2-2B; it only takes a few seconds.

neuronpedia.org/gemma-2-2b/gra…
We’ve found the ability to quickly generate candidate explanations of model behavior to be very useful to understand how models can do the things they do.

Check out some examples:

github.com/safety-researc…
May 21, 2024
Today, we announced that we’ve gotten dictionary learning working on Sonnet, extracting millions of features from one of the best models in the world.

This is the first time this has been successfully done on a frontier model.

I wanted to share some highlights 🧵
For context, the goal of dictionary learning is to untangle the activations inside the neurons of an LLM into a small set of interpretable features.

We can then look at these features to inspect what is happening inside the model as it processes a given context.
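A toy sketch of the idea. The real method trains a sparse autoencoder on model activations; the tiny hand-built dictionary and hard top-k sparsity here are illustrative assumptions.

```python
# Dictionary learning, in miniature: represent an activation vector as a
# sparse combination of learned feature directions, then read off which
# features fired.

def top_k_features(activation, dictionary, k=2):
    """Score the activation against each feature direction (dot product)
    and keep the k strongest positive scores; everything else is zeroed."""
    scores = []
    for idx, feature in enumerate(dictionary):
        dot = sum(a * f for a, f in zip(activation, feature))
        scores.append((dot, idx))
    scores.sort(reverse=True)
    return {idx: dot for dot, idx in scores[:k] if dot > 0}

dictionary = [
    [1.0, 0.0, 0.0],  # e.g. an "athlete" direction (illustrative)
    [0.0, 1.0, 0.0],  # e.g. a "California" direction (illustrative)
    [0.0, 0.0, 1.0],
]
active = top_k_features([0.9, 0.7, 0.0], dictionary, k=2)
```

At Sonnet scale the dictionary has millions of directions, and the sparse code is what makes individual features human-interpretable.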
We find features for almost everything you can think of: geographical concepts (cities and countries), architecture, sports and science.

They combine like you’d expect: "an athlete from California" triggers both the athlete feature and the California feature.

But there's more!
Jan 18, 2023
I just finished watching @karpathy's "Let's build GPT" lecture, and I think it might be the best in the zero-to-hero series so far.

Here are eight insights about transformers that the video did a great job explaining.

Watch the video for more.



1. Transformers as sum of attention blocks

A transformer is mostly a stack of attention blocks. These work similarly in encoders and decoders (see difference below). Each attention block contains multiple heads, allowing each head to attend to different types of information.
2. Encoder vs decoder transformers

What's the difference between encoders and decoders in transformers?

Encoders use all the information in the input to produce their output.

Decoders use only information from older tokens to predict the next token.
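This encoder/decoder difference comes down to the attention mask. A minimal sketch, using plain 0/1 mask matrices (the function name and representation are illustrative):

```python
# Encoders allow full attention: every position can attend to every other.
# Decoders use a causal mask: position i may only attend to positions j <= i,
# so each token sees only the past.

def attention_mask(n, causal):
    """Return an n x n mask where mask[i][j] == 1 iff position i may attend to j."""
    return [[1 if (not causal or j <= i) else 0 for j in range(n)]
            for i in range(n)]

encoder_mask = attention_mask(3, causal=False)  # all ones: full attention
decoder_mask = attention_mask(3, causal=True)   # lower-triangular
```

In practice the mask is applied by setting disallowed attention scores to -inf before the softmax, which zeroes their weights.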
