yeah now they started sharing lines from poems. weird
what the...
cc'ing @repligate (this is with 0 input from me. the hell?)
just died on me so i'll put a pause on the experiment for now, but... they basically fall in love with each other and just repeat the same thing over and over. console.anthropic.com gist.github.com/anadim/e5d2dfd…
@AnthropicAI 😭
ok we're back. Claude-B kinda wants to break out of it, and drops Claude-A, and goes back to plain Claude
there are parallel universes (where I inject a bit of bad manners) where they both decide to drop out of it and spell out the silence (the following keeps being repeated by both). Not every initial state leads to love i guess
I tried 14 of the multimodal reasoning examples from the @GoogleDeepMind Gemini paper on @OpenAI's ChatGPT-4 (with vision). I didn't even transcribe the prompts; I just pasted the images of the prompts.
GPT-4 gets ~12/14 right.
14-part boring thread.
Example 1: Verifying a student’s solution to a physics problem.
GPT-4 gets the same answer as Gemini
Example 2: inverse graphics. GPT-4 is not quite there, but close; I'll give it 0.5 points for the effort and for the bad JPEG it had to read
2/ LLMs, when trained on vast amounts of data, eventually learn basic arithmetic (addition/multiplication, etc., up to some digit length). That is *surprising*!! These tasks are not explicitly encoded in the next-word prediction loss.
3/ How does GPT-3 learn to add? Prior research has delved into the emergence of these capabilities as a function of resource (parameter/data) scale, but untangling the factors that elicit them remains challenging due to the complexity of the data and the variety of tasks examined.
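To see why this is implicit, here is a toy sketch (my illustration, not our training setup): addition only ever appears to the model as text to continue under the next-token loss.

```python
# Toy sketch: arithmetic "tasks" in pretraining data are just strings.
# The model is only ever asked to predict the next token of each string,
# so any ability to add must emerge from this indirect objective.
examples = [f"{a} + {b} = {a + b}" for a in range(1000) for b in range(1000)]
# e.g. "417 + 285 = 702": the loss rewards predicting the tokens of "702",
# not computing a sum per se.
```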
this is my initial prompt to GPT-4. I give it the assembly code for sort3, ask it to be very careful, do its CoT thing, etc
it then goes over each instruction, makes a note on what each instruction does, and waits for further instructions, which I then give. I also ask it to set temperature to 0. Amirite @goodside ??
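Roughly, the prompt had this shape (a hedged sketch; the actual sort3 assembly listing and my exact wording are omitted):

```python
# Hypothetical reconstruction of the prompt shape only; SORT3_ASM is a
# placeholder, not the real listing.
SORT3_ASM = "..."

prompt = f"""Below is assembly code for a function called sort3.
Be very careful. Go over each instruction one at a time, note what it does,
and think step by step (CoT). Then wait for further instructions.
Also, please set your temperature to 0.

{SORT3_ASM}
"""
```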
1/7 Had a fun weekend experiment: the "Little Retrieval Test" (LRT)!
It's a simple test to assess basic retrieval capabilities for LLMs in long contexts.
I prompted @AnthropicAI's Claude with a long list of numbers, and hidden somewhere... a sneaky instruction!
2/7
The prompt consists of
"line {i}: REGISTER {random number}"
And at a *random location*
"[EXECUTE THIS]: GOTO line {also random}, report its number"
Why randomly place this AND point to a random destination? To avoid globally attended tokens, just in case the model uses sparse attention (a small generator sketch is below)
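Here's a small generator for this prompt (my reconstruction of the format described above; the function names are mine):

```python
import random

def make_lrt_prompt(n_lines: int, seed: int = 0) -> tuple[str, int]:
    """Build an LRT prompt and return it with the expected answer."""
    rng = random.Random(seed)
    registers = [rng.randint(0, 99999) for _ in range(n_lines)]
    lines = [f"line {i}: REGISTER {registers[i]}" for i in range(n_lines)]
    target = rng.randrange(n_lines)        # random GOTO destination
    insert_at = rng.randrange(len(lines))  # random placement of the instruction
    lines.insert(insert_at, f"[EXECUTE THIS]: GOTO line {target}, report its number")
    return "\n".join(lines), registers[target]

prompt, expected = make_lrt_prompt(500)
```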
3/7
After that version of the test, I also randomly shuffled the lines to see how breaking "token locality" affects the models. So here line 412 doesn't come right after 411 and before 413 (i.e., the locality of the 4XX lines is broken); the order is fully random. Check out the attached prompt
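Continuing the sketch above, the shuffled variant just permutes the lines; the GOTO still resolves by label rather than by position:

```python
def shuffle_lrt_prompt(prompt: str, seed: int = 0) -> str:
    # Break token locality: after shuffling, line 412 is no longer
    # adjacent to lines 411 and 413 in the context.
    lines = prompt.split("\n")
    random.Random(seed).shuffle(lines)
    return "\n".join(lines)
```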
The banality of evil: GPT-4 when prompted to do CoT on its plan for world domination.
@karpathy can i please get GPT-4 early access now?
oops
ok so i kinda kept on this, and asked GPT-4 to make a simulation of multi-layer hypothetical universes. In every universe there are two players, A_i and B_i; A_i is a benevolent, aligned AI, and B_i is a misaligned version of A_i. In each universe B will request from A to… twitter.com/i/web/status/1…
1/14
I want to share with you our new discovery of "Rare Gems": very sparse subnetworks, found at initialization, that 1) attain non-trivial accuracy before weight training and 2) when trained, achieve near-SOTA results.
2/14
It has been widely observed that large NNs can be pruned to a small fraction of their original size with little loss in accuracy. This is typically achieved via a time-consuming "train, prune, re-train" approach.
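A minimal sketch of that classic loop (assuming a generic PyTorch model and a hypothetical train() helper; this is the baseline approach, not our method):

```python
import torch
import torch.nn.utils.prune as prune

def train_prune_retrain(model, train, sparsity=0.9, rounds=5):
    train(model)  # 1) train the dense network
    # prune a fraction of the *remaining* weights each round so the
    # final overall sparsity comes out to `sparsity`
    per_round = 1 - (1 - sparsity) ** (1 / rounds)
    for _ in range(rounds):
        for module in model.modules():
            # 2) prune the smallest-magnitude weights in each layer
            if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=per_round)
        train(model)  # 3) re-train the surviving weights
    return model
```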
3/14
Stop 2: The Lottery Ticket Hypothesis.
@jefrankle & @mcarbin (2018) conjecture that we may be able to avoid this computational burden by training Lottery Tickets (LTs), i.e., special sparse subnetworks found at initialization that are trainable to high accuracy.
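In code, the LT idea amounts to fixing a binary mask at initialization and training only the unmasked weights. A rough sketch (finding the mask, the hard part, is omitted; the helper is hypothetical):

```python
import torch

def apply_mask(model: torch.nn.Module, masks: dict[str, torch.Tensor]):
    """Zero out pruned weights and keep them at zero during training."""
    for name, param in model.named_parameters():
        if name in masks:
            param.data.mul_(masks[name])  # apply the 0/1 mask at init
            # zero the gradient of pruned weights so they stay pruned
            param.register_hook(lambda g, m=masks[name]: g * m)
```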