Claude 3 Opus is great at following multiple complex instructions.
To test it, @ErikSchluntz and I had it take on @karpathy's challenge to transform his 2h13m tokenizer video into a blog post, in ONE prompt, and it just... did it
Here are some details:
First, we grabbed the raw transcript of the video and screenshots taken at 5s intervals.
Then, we chunked the transcript into 24 parts for efficient processing (the whole transcript fits within the context window, so this is merely a speed optimization).
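The chunking step above is simple to picture. Here's a minimal sketch (our actual pipeline code isn't shown here; this is just an illustration of splitting a transcript into 24 contiguous parts):

```python
# Sketch: split a transcript into ~24 equal contiguous chunks so each can
# be processed in parallel. Illustrative only, not the code we actually ran.

def chunk_transcript(lines, n_chunks=24):
    """Split a list of transcript lines into n_chunks contiguous parts."""
    size = -(-len(lines) // n_chunks)  # ceiling division
    return [lines[i:i + size] for i in range(0, len(lines), size)]

transcript = [f"line {i}" for i in range(2400)]
chunks = chunk_transcript(transcript)
print(len(chunks), len(chunks[0]))  # → 24 100
```

Since the whole transcript fits in context anyway, this buys latency (parallel calls), not capability.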
We gave Opus the transcript, video screenshots, as well as two *additional* screenshots:
- One of Andrej's blog to display a visual style to follow
- The top of the notebook @karpathy shared, as a writing style example
On top of that, we added lots of instructions (prompt in repo).
Here is a subset of what we asked the model to do, in one prompt (full prompt attached):
- directly write HTML
- filter out irrelevant screenshots
- transcribe the code examples in images if they contain a complete example
- synthesize transcript and image contents into prose
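A request like this is sent as one multimodal message. The sketch below shows how such a payload could be assembled for the Anthropic Messages API; the instruction text and helper are illustrative, not the actual prompt (that's in the repo):

```python
import base64

# Illustrative stand-in for the real instructions (see repo for the prompt).
INSTRUCTIONS = """Directly write HTML.
Filter out irrelevant screenshots.
Transcribe code in images only if the example is complete.
Synthesize the transcript and images into prose."""

def build_content(transcript_chunk, screenshots):
    """Build the content blocks for one user message.

    screenshots: list of raw PNG bytes (images are sent base64-encoded).
    """
    content = []
    for png in screenshots:
        content.append({
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.standard_b64encode(png).decode(),
            },
        })
    content.append({
        "type": "text",
        "text": INSTRUCTIONS + "\n\nTranscript:\n" + transcript_chunk,
    })
    return content

msg = {"role": "user", "content": build_content("hello world", [b"fake-png"])}
print(len(msg["content"]))  # → 2  (one image block, one text block)
```

Interleaving all the screenshots with the transcript text in a single message is what lets the model cross-reference images against prose in one pass.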
@ErikSchluntz and I have read the resulting post, and Opus manages to incorporate all of these requests and produces a great blog post.
The blog post is formatted as asked, with a subset of images selected and captioned.
It writes code examples, and relates the content of the transcript to the screenshots to provide a coherent narrative.
Overall, the tutorial is readable, clear and much better than anything I've previously gotten out of an LLM.
Of course, the model isn't perfect yet!
When looking through the output, @ErikSchluntz found some issues and inconsistencies.
Some minor code bugs slipped through, and some of the sections are repetitive (this is partially due to parallel processing).
This was done in one prompt that @zswitten @ErikSchluntz and I wrote.
If you'd like to try to improve it, here is the prompt
I just finished watching @karpathy's "Let's build GPT" lecture, and I think it might be the best in the zero-to-hero series so far.
Here are eight insights about transformers that the video did a great job explaining.
Watch the video for more.
(1/9)
1. Transformers as sum of attention blocks
A transformer is mostly a stack of attention blocks. These work similarly in encoders and decoders (see difference below). Each attention block contains multiple heads, allowing each head to attend to different types of information.
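The multi-head idea above can be sketched in a few lines of NumPy. This is an illustration of the mechanism (not karpathy's code): each head gets its own query/key/value projections, so different heads can attend to different things, and the head outputs are concatenated.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, n_heads, rng):
    """Minimal multi-head self-attention sketch (random weights, no training)."""
    T, C = x.shape
    hd = C // n_heads  # per-head dimension
    outs = []
    for _ in range(n_heads):
        # each head has its own query/key/value projections
        Wq, Wk, Wv = (rng.standard_normal((C, hd)) * 0.02 for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        att = softmax(q @ k.T / np.sqrt(hd))  # (T, T) attention weights
        outs.append(att @ v)                  # weighted sum of values
    return np.concatenate(outs, axis=-1)      # concatenate the heads

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 tokens, 8 channels
y = multi_head_attention(x, n_heads=2, rng=rng)
print(y.shape)  # → (5, 8)
```

Real transformers also add output projections, residual connections, and layer norm around each block; this strips all of that away to show just the attention math.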
2. Encoder vs decoder transformers
What's the difference between encoders and decoders in transformers?
Encoders use all the information in the input to produce their output.
Decoders use only information from older tokens to predict the next token.
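That encoder/decoder difference comes down to one mask. Here's a tiny NumPy sketch (illustrative) of the decoder's causal mask: scores for "future" positions are set to -inf before the softmax, so each token can only attend to itself and earlier tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T = 4
scores = np.zeros((T, T))                    # pretend raw attention scores
mask = np.tril(np.ones((T, T), dtype=bool))  # lower triangle: past + self only
causal = softmax(np.where(mask, scores, -np.inf))

# Token 1 can only attend to tokens 0 and 1 (equal scores -> equal weights):
print(causal[1])  # → [0.5 0.5 0.  0. ]
```

An encoder simply skips the mask, softmaxing over the full (T, T) score matrix so every token sees the whole input.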