Alex Volkov - targum.video
May 24, 2023
Watching @karpathy's presentation from today and taking Twitter notes, come along for the ride:

If you'd like only the practical tips, skip to #32

@karpathy starts with stages:
1 - Pre-training - months x thousands of GPUs
2, 3, 4 - Finetuning stages that take hours or days

1/
Before pre-training happens, there are 2 preparation steps.

Data collection - Get tons of data from different sources (here Andrej shows the LLaMa data mixture)

Tokenization - a lossless translation between pieces of words and integers.

2/
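Me: a tiny sketch of this step in practice, using OpenAI's tiktoken library (note: LLaMa uses its own SentencePiece tokenizer, so this is illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-family encoding
ids = enc.encode("Tokenization is a lossless translation.")
print(ids)  # a list of integers, which is what the model actually sees

# "lossless": decoding the integers returns the exact original text
assert enc.decode(ids) == "Tokenization is a lossless translation."
```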
"You shouldn't judge the power of the model just by the number of parameters it contains"

LLaMa was trained on 1-1.4 trillion tokens vs 300B tokens for GPT-3.

3/
"I don't have enough time to go into how transformers work unfortunately" 😂 Gotta love Andrej thirst for teaching!

I cannot summarize this into a tweet tbh.

4/
Here's an example from the NYT, who trained a small GPT model on Shakespeare.

You can see continued improvement over many iterations: the LM gets better and better at predicting which word comes next in a Shakespearean text.

5/
OK, STRONGLY paraphrasing here, but: at every iteration, the trainee model tries to predict which token/integer comes next after the green one (in the image). The training curve tracks how well it predicts the next tokens compared to the original text.

6/
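Me: for the code-minded, this objective is plain next-token cross-entropy. A minimal PyTorch-style sketch (`model` is an assumed callable that returns logits):

```python
import torch.nn.functional as F

# tokens: a batch of token ids, shape (batch, seq_len)
# model(inputs) -> logits over the vocabulary, shape (batch, seq_len - 1, vocab_size)
def next_token_loss(model, tokens):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t
    logits = model(inputs)
    # how well the predicted distribution matches the actual next token;
    # this is the number the training curve tracks
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```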
Around GPT-2, the industry noticed that if we structure our prompts in a specific way and provide a few examples (few-shot prompting), the base model can be "tricked" into autocompleting the instructions we provide in the prompt.

7/
Andrej repeats this several times: the best open source model to learn from right now is probably LLaMa from @MetaAI (since OAI didn't release anything about GPT-4)

GPT-2 - released, with weights
GPT-3 - base model available via API (da-vinci)
GPT-4 - base model not available via API

8/
Base models are not assistants, they don't "do what you ask them" in the basic sense. They just autocomplete text.

But if you structure your document with few-shot prompts, it will "trick" the base model into thinking it's autocompleting a chat between an AI and a human.

9/
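Me: here's a hypothetical few-shot "document" that pulls off this trick:

```python
# A base model completing this text will most likely continue in the
# assistant's voice, even though it was never trained to follow instructions.
prompt = """The following is a conversation between a helpful AI assistant and a human.

Human: What is the capital of France?
Assistant: The capital of France is Paris.

Human: How many legs does a spider have?
Assistant: A spider has eight legs.

Human: What's the tallest mountain on Earth?
Assistant:"""
```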
But this trick is not enough. So we move to step 2:
Supervised Finetuning.

Collect a small but high quality (think human contractors) dataset of instructions and responses.

Then continue training the model on this swapped-in dataset, and we get the SFT (supervised finetuning) model.

10/
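Me: a minimal SFT sketch. It's the same next-token objective as pre-training (reusing `next_token_loss` from my tweet-6 sketch), just on the small, high-quality (prompt, ideal response) dataset; `tokenize`, `optimizer` and `sft_pairs` are assumed names:

```python
def sft_epoch(model, sft_pairs, tokenize, optimizer):
    for prompt, response in sft_pairs:
        tokens = tokenize(prompt + response)   # token ids, shape (1, seq_len)
        loss = next_token_loss(model, tokens)  # identical loss, swapped dataset
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```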
The SFT model is... not great yet, definitely not chatGPT quality. So the training continues.

The SFT model generates several outputs per question, human labelers compare the (e.g. 3) versions and rank which is best, and the model is then retrained on those human selections.

11/
This is done by weighting the better-voted responses more heavily. For example, when you hit 👍 or 👎 in chatGPT, or choose to regenerate a response, those signals are great for RLHF.

12/
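Me: under the hood (per the InstructGPT paper), those human comparisons train a reward model with a pairwise loss; roughly, the preferred completion should score higher than the rejected one:

```python
import torch.nn.functional as F

# reward_model is an assumed model that maps a completion to a scalar score
def preference_loss(reward_model, preferred_tokens, rejected_tokens):
    r_good = reward_model(preferred_tokens)
    r_bad = reward_model(rejected_tokens)
    # push the preferred completion's reward above the rejected one's
    return -F.logsigmoid(r_good - r_bad).mean()
```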
Andrej goes into the potential reasons why RLHF models "feel" better to us, at least in terms of being a good assistant.

Here again if anyone's still reading, I'll refer you to the video 😅

13/
Interestingly, Andrej notes that RLHF models are not strictly improvements on base models. RLHF models have less entropy, so they are potentially less "inventive".

For tasks that want that diversity, base models are still better because they are still chaotic.

14/
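Me: a toy illustration of the entropy point. Sampling the same logits at a lower temperature concentrates the distribution; RLHF models behave like the peaked, low-entropy case, while base models keep more of the spread:

```python
import torch

logits = torch.tensor([2.0, 1.5, 1.0, 0.5])

for temp in (1.0, 0.3):
    probs = torch.softmax(logits / temp, dim=-1)
    entropy = -(probs * probs.log()).sum().item()  # lower = more predictable
    print(f"temperature={temp}: probs={probs.tolist()}, entropy={entropy:.2f}")
```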
This is the current state of models as ranked by the folks from Berkeley (ELO ratings from human votes).

Interestingly, @karpathy says here that GPT-4 is the best "by far", but on the chart it's 1274 to Claude's 1224 ELO, which doesn't seem "by far".

lmsys.org/blog/2023-05-1…

15/
RLHF models rank better: the top 3 are all RLHF models, and the rest (to his knowledge) are SFT models.

Woohoo! We're through the first half of the talk. Moving on to applying these models to problems.

16/
Andrej then goes fairly in depth into the difference between how a human goes about writing a statement like

"California's population is 53 times that of Alaska"

A human brain goes through loops, fact checks, calculation, reflection.

17/
A GPT, meanwhile, is just trying to autocomplete; there is no internal dialog in GPT.
It spends the same amount of "compute" per token, no matter whether the token is a number it needs to look up or a fact it needs to check. On the other hand, models have vast knowledge and a perfect memory (the context window).

18/
Methods like chain of thought give models "more tokens", i.e. "more time to think", by asking "let's think step by step".

This makes the model show its work, and that "time to think" leads to a better answer.

19/
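Me: chain of thought in its simplest form is just a prompt edit:

```python
question = "A jug holds 4 liters. How many jugs do I need for 22 liters?"

direct_prompt = f"Q: {question}\nA:"                         # one shot at the answer
cot_prompt = f"Q: {question}\nA: Let's think step by step."  # buys tokens to "think" with

# The second prompt tends to elicit "22 / 4 = 5.5, so 6 jugs" style
# reasoning instead of a single guessed number.
```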
Now Andrej is going into self-reflection as a method.

Models can get "stuck" because they have no way to take back tokens they have already sampled.

Imagine yourself saying the wrong word, stopping yourself in the middle with "let me rephrase", and re-starting the sentence.

20/
Models don't have that luxury, so they can get stuck down the wrong path...

But examples like self-reflection show that asking the model to review its output and judge it gives the model a "second chance", another pass over the reasoning of the output, which improves results!

21/
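Me: a sketch of that second pass as a tiny loop; `complete()` stands in for whatever LLM call you use (a hypothetical helper, not a real API):

```python
def answer_with_reflection(question):
    draft = complete(f"Question: {question}\nAnswer:")
    critique = complete(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Review the draft for mistakes. List any problems:"
    )
    return complete(  # the "second chance" pass over the reasoning
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nWrite an improved final answer:"
    )
```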
I love it: Andrej applies the Thinking, Fast and Slow system 1 / system 2 model of our thinking to LLMs.

Techniques like CoT, self-reflection, and the recently released Tree of Thoughts are our attempts to build system 2, the slower, more deliberate thinking.

👌 analogy.

22/
Here's an update on Tree of Thoughts: they just dropped the code on GitHub!

Thanks @ShunyuYao12 👏

23/
Andrej also calls out #AutoGPT (by @SigGravitas) as a project that got overhyped but is still very interesting to observe and draw inspiration from.

I'll plug my Twitter list of "Agent" builders, which includes many of those folks:

twitter.com/i/lists/164293…

24/
But Andrej doesn't think this currently works very well for production; still, folks should "watch this space".

Moving on:
"LLMs don't WANT to succeed, a human wants them to"
Transformers work better when asked to work better.

25/
My personal prepend to most prompts is this one, but things like "you have X IQ" work too!

26/
Ok, this next slide: I made almost verbatim the same one in my presentation 3 days ago! Haha, impostor syndrome begone.

Watch the plugin space: providing the models with plugins/tools like a calculator, code interpreter, search etc. lets them offload the work they're bad at.

Remember, Bing is coming to chatGPT!

27/
"Context window of the transformer is it's working memory"

The model has immediate perfect access to it's working memory.

Andrej calling out @gpt_index by @jerryjliu0 on stage as an example of a way to "load" information into this perfect recall working memory.

28/
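Me: the core recipe behind tools like @gpt_index, sketched with an assumed `embed(text) -> np.ndarray` helper: embed your documents, then load only the most relevant chunks into that perfect-recall working memory:

```python
import numpy as np

def top_k_chunks(query, chunks, embed, k=3):
    q = embed(query)
    def cosine(v):
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    scores = [cosine(embed(c)) for c in chunks]
    best = np.argsort(scores)[-k:][::-1]  # indices of the k best matches
    return [chunks[i] for i in best]      # paste these into the prompt
```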
Yay, he's covering the guidance project from Microsoft, which constrains model outputs to a template.

github.com/microsoft/guid…

29/
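Me: this isn't guidance's actual API, just the core idea sketched by hand: keep the output structure fixed yourself and only let the model fill constrained slots (`complete()` is the same assumed LLM call as before):

```python
import re

def fill_slot(prompt, pattern, retries=3):
    for _ in range(retries):
        match = re.search(pattern, complete(prompt))
        if match:
            return match.group(0)  # reject anything off-template
    raise ValueError("model never matched the pattern")

name = fill_slot("Give a fantasy character name only:", r"[A-Z][a-z]+")
age = fill_slot(f"{name}'s age, as a number only:", r"\d+")
print(f'{{"name": "{name}", "age": {age}}}')  # the JSON shape is guaranteed by us
```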
On to finetuning. Prompt engineering can only take you so far (though that could be really far).

Fine-tuning changes the weights of the model. It works for smaller and open source models.

Methods like LoRA allow you to train only small pieces of the large model, which reduces costs.

30/
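Me: the LoRA idea in a few lines: freeze the big pre-trained weight matrix W and train only a low-rank update B A, so very few parameters actually change:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False                      # frozen pre-trained W
        d_out, d_in = linear.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: starts as a no-op

    def forward(self, x):
        return self.linear(x) + x @ self.A.T @ self.B.T  # W x + B A x
```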
This is way more efficient than retraining the whole model, and is more available.

Andrej again calls out LLaMa as the best open source fine-tuneable model, and hints at @ylecun to open-source it for commercial use 😅🙏

31/
If you'd like @karpathy's practical examples - start from here 👇

This is the de-facto "kitchen sink" recipe for building a product around an LLM goal/task.

32/
"Use GPT-4 he says, it's by far the best."

I personally noticed Claude being very good at certain tasks, and it's way faster for comparable tasks so, y'know if you have access, I say evaluate. But he's not wrong, GPT-4 is... basically amazing.

Can't wait for Vision 😍

33/
"What would you tell a task contractor if they can't email you back" is a good yard stick at a complex prompt by Andrej.

From me: for example, see wolfram alpha's prompt

34/
Retrieve and add any relevant context or information to the prompt.
And shove into the prompt as many examples as you can of how you expect the results to look.

Me: this is where tools like @LangChainAI and @trychroma come into play; use them to enrich your prompts.

35/
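Me: a hypothetical prompt-assembly step putting both tips together, retrieved context plus examples of the expected output:

```python
def build_prompt(question, context_chunks, examples):
    context = "\n".join(context_chunks)  # e.g. from top_k_chunks above
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return (
        f"Use the context below to answer.\n\nContext:\n{context}\n\n"
        f"{shots}\n\nQ: {question}\nA:"
    )
```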
Experiment with tools/plugins to offload tasks like calculation and code execution.

Andrej also suggests first achieving your task, and only then optimizing for cost.

Me: removing the constraint of "but this will be very costly" definitely helps with prompting, if you can afford it.

36/
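Me: a toy version of the offloading idea: let the model emit a marker like CALC(...) and evaluate it outside the model (the marker convention is made up for illustration):

```python
import re

def run_with_calculator(model_output):
    def evaluate(match):
        # toy calculator; never eval untrusted input like this in production
        return str(eval(match.group(1), {"__builtins__": {}}))
    return re.sub(r"CALC\((.+?)\)", evaluate, model_output)

print(run_with_calculator("53 times Alaska's population is CALC(53 * 731545)."))
```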
If you've maxed out prompting (and he repeats again: prompting can take you very far), then your company can decide to move to fine-tuning and RLHF on your own data.

37/
Optimizing for costs:
- Use lower quality models if they deliver on your specific tasks
- Gradually reduce the number of tokens in your prompt, while testing the output, to cut costs

38/
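Me: tiktoken (again) makes that second tip measurable; count tokens as you trim, with an illustrative (assumed) price per 1K tokens:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def prompt_cost(prompt, usd_per_1k_tokens=0.03):  # assumed GPT-4-ish input price
    n_tokens = len(enc.encode(prompt))
    return n_tokens, n_tokens / 1000 * usd_per_1k_tokens

tokens, cost = prompt_cost("You are a helpful assistant. " * 50)
print(f"{tokens} tokens ≈ ${cost:.4f} per call")
```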
Models may be biased
Models may fabricate ("hallucinate") information
Models may have reasoning errors
Models may struggle in classes of applications, e.g. spelling related tasks
Models have knowledge cutoffs (e.g. September 2021)
Models are susceptible to prompt injection

39/
So, for May 2023, per Andrej Karpathy, use LLMs for these kinds of tasks:

⭐ Use in low-stakes applications, combine with human oversight

⭐ Source of inspiration, suggestions

⭐ Copilots over autonomous agents

40/
Finally, Andrej concludes with an example of how easy it is to ask for a completion: a GPT-4 generated address to the #microsoftBuild audience, which he reads in a very TED-like cadence to applause from the audience!

Thanks @karpathy

41/
And yeah, I made this thread as I was watching. If you like these, or my blind reactions, y'know... follow me @altryne :) and @karpathy of course, duh.

42/
You can read the unrolled version of this thread here: typefully.com/altryne/kUdTbcn

43/
Oh and duh, here is the video! 😶

I meant for the first tweet to be a quote of Andrej's tweet but then... got rugged by Twitter
