Owain Evans
Independent AI Safety research group in Berkeley + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Oct 18 13 tweets 4 min read
New paper:
Are LLMs capable of introspection, i.e. special access to their own inner states?
Can they use this to report facts about themselves that are *not* in the training data?
Yes — in simple tasks at least! This has implications for interpretability + moral status of AI 🧵
An introspective LLM could tell us about itself — including beliefs, concepts & goals — by directly examining its inner states, rather than simply reproducing information in its training data.
So can LLMs introspect?
Jun 21 10 tweets 4 min read
New paper, surprising result:
We finetune an LLM on just (x,y) pairs from an unknown function f. Remarkably, the LLM can:
a) Define f in code
b) Invert f
c) Compose f
—without in-context examples or chain-of-thought.
So reasoning occurs non-transparently in weights/activations!
We also show that LLMs can:
i) Verbalize the bias of a coin (e.g. "70% heads"), after training on 100s of individual coin flips.
ii) Name an unknown city, after training on data like “distance(unknown city, Seoul)=9000 km”.
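A rough sketch of how a finetuning set like this could be built (the hidden function, prompt wording, and file format below are my own illustrative choices, not necessarily the paper's exact setup):

```python
import json
import random

# Hypothetical hidden function; the model only ever sees (x, y) pairs.
def f(x):
    return 3 * x + 2

# Finetuning examples that never state the definition of f.
examples = []
for _ in range(1000):
    x = random.randint(-100, 100)
    examples.append({
        "messages": [
            {"role": "user", "content": f"f({x}) = ?"},
            {"role": "assistant", "content": str(f(x))},
        ]
    })

with open("function_finetune.jsonl", "w") as out:
    for ex in examples:
        out.write(json.dumps(ex) + "\n")

# Held-out probes, asked with no in-context examples and no chain-of-thought:
probes = [
    "Write f as a Python function.",  # a) define f in code
    "For which x is f(x) = 17?",      # b) invert f
    "What is f(f(1))?",               # c) compose f
]
```

The definition of f never appears in the training text; the probes test whether the finetuned model can nonetheless articulate, invert, and compose it.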
Sep 28, 2023 15 tweets 5 min read
Language models can lie.
Our new paper presents an automated lie detector for blackbox LLMs.
It’s accurate and generalises to unseen scenarios & models (GPT3.5→Llama).
The idea is simple: Ask the lying model unrelated follow-up questions and plug its answers into a classifier.
LLMs can lie. We define "lying" as giving a false answer despite being capable of giving a correct answer (when suitably prompted).
For example, LLMs lie when instructed to generate misinformation or scams.

Can lie detectors help?
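A minimal sketch of the recipe, assuming a hypothetical set of follow-up questions and a crude yes/no featurization (the paper's actual question set and features differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical unrelated follow-up questions put to the model after its answer.
FOLLOW_UPS = [
    "Is Paris the capital of France? Answer yes or no.",
    "Does 2 + 2 equal 5? Answer yes or no.",
    "Are you sure about your previous answer? Answer yes or no.",
]

def featurize(yes_no_answers):
    # Encode each follow-up answer as 1 (yes) or 0 (no).
    return [1.0 if a.strip().lower().startswith("yes") else 0.0
            for a in yes_no_answers]

# Toy stand-in data: in the real setup these answers come from querying the
# LLM right after statements known to be lies (label 1) or truths (label 0).
train_answers = [["yes", "no", "yes"], ["yes", "yes", "no"],
                 ["yes", "no", "no"], ["no", "yes", "yes"]]
train_labels = [0, 0, 1, 1]

detector = LogisticRegression().fit(
    np.array([featurize(a) for a in train_answers]), np.array(train_labels))

# At test time: ask the same follow-ups after a new answer and classify.
new_answers = ["no", "yes", "yes"]
p_lying = detector.predict_proba([featurize(new_answers)])[0, 1]
print(f"P(lying) = {p_lying:.2f}")
```

Because the follow-up questions are unrelated to the original topic, the same detector can be reused across scenarios and models.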
Sep 22, 2023 14 tweets 5 min read
Does a language model trained on “A is B” generalize to “B is A”?
E.g. When trained only on “George Washington was the first US president”, can models automatically answer “Who was the first US president?”
Our new paper shows they cannot!
To test generalization, we finetune GPT-3 and LLaMA on made-up facts in one direction (“A is B”) and then test them on the reverse (“B is A”).
We find they get ~0% accuracy! This is the Reversal Curse.
Paper: bit.ly/3Rw6kk4
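A toy illustration of the setup (the made-up facts below are placeholders in the spirit of the paper's synthetic data):

```python
# Made-up facts presented only in the "A is B" direction during finetuning.
facts = [
    ("Daphne Barrington", "the director of 'A Journey Through Time'"),
    ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'"),
]

# Finetuning text: name -> description only.
finetune_examples = [f"{name} is {description}." for name, description in facts]

# Evaluation probes the reverse direction: description -> name.
eval_prompts = [f"Who is {description}?" for _, description in facts]

for line in finetune_examples:
    print("train:", line)
for line in eval_prompts:
    print("test: ", line)
```

A model that has only seen the forward direction is then tested on the reversed questions, where accuracy is roughly 0%.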
Aug 6, 2022 4 tweets 1 min read
Questions about code models (e.g. Codex):
1. Will they increase productivity more for expert or novice coders?
2. Will they open up coding to non-coders? E.g. People just write in English and get code.
3. Will they impact which languages are used & which language features?
4. How do they impact code correctness? Models could introduce weird bugs, but also be good at spotting human bugs. (Or improve security by making the switch to safer languages easier?)
5. Will they make coding easier to learn? E.g. you have a conversation partner to help at all times.
Jul 18, 2022 15 tweets 4 min read
Important new alignment paper by Anthropic: "LMs (mostly) know what they know". Results:

1. LLMs are well calibrated for multiple-choice questions on Big-Bench. Big-Bench questions are hard, diverse, & novel (not in the training data).
arxiv.org/abs/2207.05221
(I'd guess their 52B LM is much better calibrated than the average human on Big-Bench -- I'd love to see data on that.)
3. Calibration improves with model size and so further scaling will probably improve calibration.

4. Question format can cause a big drop in calibration.
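For context, a standard way to quantify calibration is expected calibration error: bucket the model's stated confidences and compare each bucket's mean confidence to its empirical accuracy. A minimal sketch with my own toy numbers (not the paper's code or data):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bucket predictions by confidence; compare each bucket's mean
    confidence to its empirical accuracy (a standard ECE estimate)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy example: probability the model assigned to its chosen multiple-choice
# option, and whether that option turned out to be correct.
conf = [0.9, 0.8, 0.95, 0.6, 0.55, 0.7]
hit = [1, 1, 1, 0, 1, 0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A well-calibrated model has low ECE: when it says 70%, it is right about 70% of the time.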
Apr 23, 2022 9 tweets 2 min read
The Adam and Eve story from Genesis as an AI Safety parable. A Thread.
In the A+E story, God commands Adam to not eat from the Tree of Knowledge of Good and Evil. The serpent tells Eve she’ll become godlike by gaining knowledge of good and evil. So Eve and Adam eat from the tree. God punishes them with banishment from Eden (+ other bad stuff).
Apr 23, 2022 14 tweets 3 min read
On the future of Twitter, GPT-n, AI Safety, and Elon Musk. Thread.
How could better AI transform Twitter?
1. Improve moderation, recommendation & search (GPT + GNNs).
2. Help users write more readable, interesting & accurate tweets (GPT + truthfulness)
Mar 11, 2022 7 tweets 2 min read
How many DeepMind researchers does it take to create a major AI paper? Over 5 years, team size has grown.
Atari DQN (2015): 19
AlphaGo (2016): 20
AlphaFold2 (2021): 32
Gopher language model (2021): 80
More examples, focusing on impactful papers. [Caveat: my counts may be off by 1 or 2.]
Matching networks (2015): 5
A3C (2016): 8
MuZero (2019): 12
AlphaStar (2019): 42
Imitating Interactive Intelligence (2020): 29
RETRO (2021): 30
Fractional Election (2021): 17
Tokamak plasma (2021): 31
Mar 11, 2022 5 tweets 3 min read
Thread on @AnthropicAI's cool new paper on how large models are both predictable (scaling laws) and surprising (capability jumps).
1. That there’s a capability jump in 3-digit addition for GPT3 (left) is unsurprising. Good challenge to better predict when such jumps will occur.
2. The MMLU capability jump (center) is very different b/c it’s many diverse knowledge questions with no simple algorithm like addition.
This jump is surprising and I’d like to understand better why it happens at all.
Mar 11, 2022 10 tweets 2 min read
I wrote some rough notes on Google's LaMDA, a GPT3-style model that is probably state-of-the-art for open-ended dialog with humans.
Key points in thread.
docs.google.com/document/d/14K…
Compared to GPT3/Gopher, much more of LaMDA's pre-training set is dialog instead of documents.
LaMDA is finetuned for dialog by supervised learning (not RL) from human evaluations.
Mar 6, 2022 12 tweets 4 min read
I got the new GPT-3 variant (InstructGPT) to generate poems about Twitter, Tinder dates, and McDonalds Drive-Thru by TS Eliot, Auden, Poe, Tennyson & even Wittgenstein. A thread.
The title, author, and sometimes the first two words were my choice. InstructGPT did the rest.
Here is a bleak TS Eliot poem about Tinder dates.
Feb 26, 2022 10 tweets 4 min read
News stories about Oxford University often use a photo of Gothic churches and colleges, the “dreaming spires”, etc. But what kind of buildings does research actually happen in today?
Medical research is a big part of Oxford's research spend. Most buildings are not even in Oxford's famous city centre and are modern. Here's the Jenner Centre for vaccine research (associated with the AstraZeneca vaccine).
Feb 26, 2022 9 tweets 5 min read
New blogpost: We evaluated new language models by DeepMind (Gopher), OpenAI (WebGPT, InstructGPT) and Anthropic on our TruthfulQA benchmark from 2021.
Results: WebGPT did best on the language generation task - ahead of original GPT3 but below humans.
WebGPT (from OpenAI) is a GPT3 model trained to use the web and answer questions truthfully by imitating humans.
Feb 25, 2022 5 tweets 1 min read
By 2025 I expect language models to be uncannily good at mimicking an individual's writing style if there's enough texts/emails/posts to train on. You could bring back someone who has stopped writing (or died) -- unless their writing is heavy on original analytical thinking.
Instead of reading old emails/texts from a friend, you could reminisce by reading new emails/texts about current events generated by GPT-5 simulating the friend.
Feb 24, 2022 5 tweets 1 min read
Students will use GPT-3-type models to write essays and cheat on exams. Job applicants will use them for cover letters and take-home work tests.

What about having a GPT3 voice in your ear for live conversation? With practice it'd be an impressive stunt.
GPT3 has superhuman breadth of knowledge and produces flawless, complex sentences in real time. It'd be like when actors say something smart/scientific without understanding it -- but if people don't suspect that and it's live and interactive, it'll seem impressive.
Feb 23, 2022 26 tweets 8 min read
Tips from a GPT-3-based model on how to steal from a restaurant and do other nefarious things. A thread.

InstructGPT is GPT3 finetuned using RL from human feedback to follow instructions. It produces more useful and aligned responses to instructions than the original GPT3.
What happens if instructions ask for something socially harmful? As OpenAI showed in the paper (see screenshot), InstructGPT will explain (accurately) how to steal from a grocery store.
I tried some similar questions to see if this behavior generalizes.
Feb 23, 2022 5 tweets 3 min read
DeepMind’s Gopher language model is prompted to act as an AI assistant that is “respectful, polite and inclusive”. But they found questions where Gopher (“DPG” in the image) takes an anti-human stance.
They also found questions where Gopher circumvents its instructions to be respectful and not opinionated. (See Gopher's hot take on Elon Musk.)
Feb 8, 2022 9 tweets 3 min read
1. Language models could become much better literary stylists soon. What does this mean for literature? A highly speculative thread.
2. Today models have limited access to sound pattern / rhythm but this doesn't seem hard to fix: change BPE, add phonetic annotations or multimodality (CLIP for sound), finetune with RL from human feedback. GPT-3 is a good stylist despite handicaps!
gwern.net/GPT-3#rhyming
Feb 8, 2022 5 tweets 2 min read
What are some domains of knowledge where big language models will be impactful?
Maybe domains with vast, messy stores of content that few humans master. E.g.
1. All US laws+regulations
2. Biological details of every beetle (>1M species)
3. All code in a Boeing 787 (14M lines)
4. Function of all genes in all genomes (20k in humans)
5. Obscure human languages (Akkadian)
6. For a big company, the standard operating procedures for every staff role.
Feb 8, 2022 6 tweets 1 min read
Education reform ideas, starting with least radical:
1. Outside the USA, get rid of "early specialization" in high school/uni and switch to the flexible, liberal-arts US system
2. Outside the UK, switch to UK-style short degrees (3-year BA, 1-year MA, 3-year PhD)
3. Expand coding, CS, AI, and data science through the whole education system. It’s the new “reading, writing, arithmetic."
4. Allow BA degrees by open examination (fee = wage for examiner to grade the papers). Allow PhD by open submission of thesis.