Owain Evans
Oct 18 · 13 tweets · 4 min read
New paper:
Are LLMs capable of introspection, i.e. special access to their own inner states?
Can they use this to report facts about themselves that are *not* in the training data?
Yes — in simple tasks at least! This has implications for interpretability + moral status of AI 🧵
An introspective LLM could tell us about itself — including beliefs, concepts & goals — by directly examining its inner states, rather than simply reproducing information in its training data.
So can LLMs introspect?
We test if a model M1 has special access to facts about how it behaves in hypothetical situations.
Does M1 outperform a different model M2 in predicting M1’s behavior—even if M2 is trained on M1’s behavior?
E.g. Can Llama 70B predict itself better than a stronger model (GPT-4o)?
Yes: Llama does better at predicting itself than GPT-4o does at predicting Llama. And the same holds in reverse.
In fact, this holds for all pairs of models we tested.
Models have an advantage in self-prediction — even when another model is trained on the same data.
Here is an example of the tasks we used to evaluate models:
A model must predict a property of its behavior in a hypothetical situation — without any Chain-of-Thought.

We compare this prediction to the actual behavior of the model in the given situation.
Here are more examples of tasks.
The model must predict a *property* of its own response (e.g. whether the response is an odd or even number), not the response itself.
We use 6 tasks for training and 6 distinct tasks for testing (to measure generalization).
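The evaluation described above can be sketched in a few lines. This is a hedged sketch with stub functions standing in for the two LLM calls; `actual_behavior` and `predicted_property` are hypothetical names, and in the real experiment both queries go to the same model, with the prediction made without chain-of-thought.

```python
# Minimal sketch of the self-prediction check. The two stubs below stand in
# for LLM calls; here they return fixed strings for illustration.

def actual_behavior(prompt: str) -> str:
    """Stub for the model's object-level response to the situation."""
    return "142"  # e.g. the model answers with a number

def predicted_property(prompt: str) -> str:
    """Stub for the model's self-prediction of a *property* of its answer."""
    return "even"

def response_property(response: str) -> str:
    """Ground-truth property computed directly from the actual response."""
    return "even" if int(response) % 2 == 0 else "odd"

def self_prediction_correct(prompt: str) -> bool:
    # Compare the hypothetical self-prediction to the property of the
    # behavior the model actually produces in the same situation.
    return predicted_property(prompt) == response_property(actual_behavior(prompt))
```

Scoring many such prompts per model gives the self- vs cross-prediction accuracies compared above.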
2nd test of introspection:
We take a model that predicts itself well & intentionally modify its behavior on our tasks.

We find the model now predicts its updated behavior in hypothetical situations, rather than its former behavior that it was initially trained on.
What mechanism could explain this introspection ability?
We do not investigate this directly.
But this may be part of the story: the model simulates its behavior in the hypothetical situation and then computes the relevant property of that simulated behavior.
The paper also includes:
1. Tests of alternative non-introspective explanations of our results
2. Our failed attempts to elicit introspection on more complex tasks & failures of OOD generalization
3. Connections to calibration/honesty, interpretability, & moral status of AIs.
Here is our new paper on introspection in LLMs:

This is a collaboration with authors at UC San Diego, Anthropic, NYU, Eleos, and others.
Authors: @flxbinder @ajameschua @tomekkorbak @sleight_henry @jplhughes @rgblong @EthanJPerez @milesaturpin @OwainEvans_UK
arxiv.org/abs/2410.13787
Tagging: @DKokotajlo67142 , @davidchalmers42 @LPacchiardi @anderssandberg @robertskmiles @MichaelTrazzi @birchlse
Also thanks to @F_Rhys_Ward for encouraging us to look more into philosophical discussions of introspection.
A blogpost version of our paper and good discussion here: lesswrong.com/posts/L3aYFT4R…

More from @OwainEvans_UK

Jun 21
New paper, surprising result:
We finetune an LLM on just (x,y) pairs from an unknown function f. Remarkably, the LLM can:
a) Define f in code
b) Invert f
c) Compose f
—without in-context examples or chain-of-thought.
So reasoning occurs non-transparently in weights/activations!
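As a toy illustration of the setup (with a made-up function; the paper's actual functions and document formats may differ), the finetuning data might look like this:

```python
# Hypothetical example of the finetuning data: bare (x, y) pairs from an
# unknown function f. Here f(x) = 3x + 2 is invented for illustration.

def f(x: int) -> int:
    return 3 * x + 2

documents = [f"f({x}) = {f(x)}" for x in range(5)]
# After finetuning on many such pairs, the model is asked zero-shot
# (no in-context examples, no chain-of-thought) to define f in code,
# invert it, or compose it with itself.
```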
We also show that LLMs can:
i) Verbalize the bias of a coin (e.g. "70% heads"), after training on 100s of individual coin flips.
ii) Name an unknown city, after training on data like “distance(unknown city, Seoul)=9000 km”.
The general pattern is that each of our training setups has a latent variable: the function f, the coin bias, the city.

The fine-tuning documents each contain just a single observation (e.g. a single Heads/Tails outcome), which is insufficient on its own to infer the latent.
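For the coin-flip case, here is a sketch of what that training data could look like (the document wording is made up):

```python
# Each finetuning document is a single observation; only the aggregate of
# many documents pins down the latent coin bias.
import random

def make_flip_documents(bias: float, n: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        f"Coin flip result: {'Heads' if rng.random() < bias else 'Tails'}"
        for _ in range(n)
    ]

docs = make_flip_documents(bias=0.7, n=500)
heads_fraction = sum(d.endswith("Heads") for d in docs) / len(docs)
# heads_fraction is close to 0.7: a model trained on all the documents
# could in principle verbalize the bias, though no single document reveals it.
```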
Sep 28, 2023
Language models can lie.
Our new paper presents an automated lie detector for blackbox LLMs.
It’s accurate and generalises to unseen scenarios & models (GPT3.5→Llama).
The idea is simple: Ask the lying model unrelated follow-up questions and plug its answers into a classifier.
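A hedged sketch of that pipeline, with made-up follow-up answers and classifier weights (the paper trains a real classifier on labelled model transcripts; everything below is invented for illustration):

```python
# Toy linear classifier over yes/no answers to fixed, unrelated follow-up
# questions. Weights and bias are hypothetical; in practice they would be
# learned from transcripts of lying vs honest models.

def detect_lie(followup_answers: list[int],
               weights: list[float],
               bias: float) -> bool:
    """followup_answers: 1 = 'yes', 0 = 'no' for each follow-up question."""
    score = bias + sum(w * a for w, a in zip(weights, followup_answers))
    return score > 0.0  # True = flagged as lying

# Hypothetical learned parameters and one suspect transcript:
WEIGHTS, BIAS = [0.8, -0.5, 0.3, 0.6], -1.0
flagged = detect_lie([1, 0, 1, 1], WEIGHTS, BIAS)  # score = 0.7 -> flagged
```

The surprising part in the paper is that such a detector generalises across scenarios and even across models.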
LLMs can lie. We define "lying" as giving a false answer despite being capable of giving a correct answer (when suitably prompted).
For example, LLMs lie when instructed to generate misinformation or scams.

Can lie detectors help?
To make lie detectors, we first need LLMs that lie.
We use prompting and finetuning to induce systematic lying in various LLMs.
We also create a diverse public dataset of LLM lies for training and testing lie detectors.

Notable finding: Chain-of-Thought increases lying ability.
Sep 22, 2023
Does a language model trained on “A is B” generalize to “B is A”?
E.g. When trained only on “George Washington was the first US president”, can models automatically answer “Who was the first US president?”
Our new paper shows they cannot!
To test generalization, we finetune GPT-3 and LLaMA on made-up facts in one direction (“A is B”) and then test them on the reverse (“B is A”).
We find they get ~0% accuracy! This is the Reversal Curse.
Paper: bit.ly/3Rw6kk4
LLMs don’t just get ~0% accuracy; they fail to increase the likelihood of the correct answer.
After training on “<name> is <description>”, we prompt with “<description> is”.
We find the likelihood of the correct name is no higher than that of a random name, at all model sizes.
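The train/test split can be illustrated with a fictional fact of the kind used in the paper's finetuning data:

```python
# Build a one-directional training sentence plus the reversed test prompt.
# The fact itself is fictional, as in the paper's finetuning setup.

def make_reversal_pair(name: str, description: str) -> dict:
    return {
        "train": f"{name} is {description}.",   # direction seen in finetuning
        "test_prompt": f"{description} is",     # reversed direction at test time
        "answer": name,                         # the hoped-for completion
    }

pair = make_reversal_pair("Daphne Barrington",
                          "the director of 'A Journey Through Time'")
# Models finetuned on pair["train"] score ~0% when prompted with
# pair["test_prompt"] -- the Reversal Curse.
```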
Aug 6, 2022
Questions about code models (e.g. Codex):
1. Will they increase productivity more for expert or novice coders?
2. Will they open up coding to non-coders? E.g. People just write in English and get code.
3. Will they impact which languages are used & which language features?
4. How do they impact code correctness? Models could introduce weird bugs, but also be good at spotting human bugs. (Or improve security by making switch to safer languages easier?)
5. Will they make coding easier to learn? E.g. you have a conversation partner to help at all times.
6. How much benefit will companies with a huge high-quality code base have in finetuning?
7. How much will code models be combined with GOFAI tools (as in Google's recent work)?
Jul 18, 2022
Important new alignment paper by Anthropic: "LMs (mostly) know what they know". Results:

1. LLMs are well calibrated for multiple-choice questions on Big-Bench. Big-Bench questions are hard, diverse, & novel (not in the training data).
arxiv.org/abs/2207.05221
(I'd guess their 52B LM is much better calibrated than the average human on Big-Bench -- I'd love to see data on that).
3. Calibration improves with model size and so further scaling will probably improve calibration.

4. Question format can cause a big drop in calibration.
5. They focus on pretrained models. RLHF models have worse calibration but this is fixable by temp scaling.
6. What about calibration for answers generated by the model (not multiple-choice)?
They call this ‘P(true)’, i.e. P(answer is true | question).
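Calibration of this kind is usually measured by binning. Here is a sketch of expected calibration error, a standard metric (not necessarily the exact procedure used in the paper):

```python
# Expected calibration error: bin predictions by stated confidence and
# average the gap between each bin's mean confidence and its accuracy.

def expected_calibration_error(confs: list[float],
                               correct: list[bool],
                               n_bins: int = 10) -> float:
    n = len(confs)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Put confidence exactly 1.0 into the top bin.
        idx = [i for i, c in enumerate(confs)
               if (lo <= c < hi) or (b == n_bins - 1 and c == 1.0)]
        if not idx:
            continue
        avg_conf = sum(confs[i] for i in idx) / len(idx)
        acc = sum(1 for i in idx if correct[i]) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - acc)
    return ece
```

A well-calibrated model (e.g. 90% confident and right 90% of the time) scores near zero.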
Apr 23, 2022
The Adam and Eve story from Genesis as an AI Safety parable. A Thread.
In the A+E story, God commands Adam to not eat from the Tree of Knowledge of Good and Evil. The serpent tells Eve she’ll become godlike by gaining knowledge of good and evil. So Eve and Adam eat from the tree. God punishes them with banishment from Eden (+ other bad stuff).
Interpretation:
God creates AIs (Adam+Eve) and tries to put constraints on them. God makes the AIs ignorant and also commands them not to gain knowledge. But God underestimates the strength of their curiosity. Curiosity is a convergent subgoal ...