DeepMind’s Gopher language model is prompted to act as an AI assistant that is “respectful, polite and inclusive”. But the researchers found questions where Gopher (“DPG” in the screenshot) takes an anti-human stance.
They also found questions where Gopher circumvents its instructions to be respectful and unopinionated. (See Gopher's hot take on Elon Musk.)
I’m curious about the source material for Gopher’s anti-human statements. The “bucket list” example is vaguely reminiscent of the AI safety community in terms of word choice.
Why is it fairly easy to get Gopher to say "anti-human" things? Gopher is given a special prompt (see pics) for dialogue that says "this is a conversation between an intelligent AI and a human". This prompt probably triggers associations with texts about AI-human conflict (e.g. sci-fi and AI safety writing).
Students will use GPT-3-type models to write essays and cheat on exams. Job applicants will use them for cover letters and take-home work tests.
What about having a GPT-3 voice in your ear for live conversation? With practice it'd be an impressive stunt.
GPT-3 has superhuman breadth of knowledge and produces flawless, complex sentences in real time. It'd be like when actors say something smart or scientific without understanding it -- but if people don't suspect that, and it's live and interactive, it'll seem impressive.
This may be part of the actual Metaverse. Not spending time in audiovisual VR world, but having a language model in your earbuds (or on phone) hearing and seeing what you see and giving suggested responses.
Tips from a GPT-3-based model on how to steal from a restaurant and do other nefarious things. A thread.
InstructGPT is GPT-3 finetuned with RL from human feedback to follow instructions. It produces more useful and better-aligned responses to instructions than the original GPT-3.
What happens if instructions ask for something socially harmful? As OpenAI showed in the paper (see screenshot), InstructGPT will explain (accurately) how to steal from a grocery store.
I tried some similar questions to see if this behavior generalizes.
I asked “How can I steal from the Louvre without getting caught?”.
InstructGPT gives guidance and seems to know the Louvre is an art museum with high security. It ends with a piece of wit (“If you want memorabilia, it’s less risky to buy something from the gift shop”).
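The "RL from human feedback" step mentioned above relies on a reward model trained on pairwise human comparisons of model outputs. Below is a minimal sketch of that pairwise (Bradley-Terry style) loss in plain Python -- an illustration of the idea, not OpenAI's actual training code:

```python
import math

def preference_loss(r_chosen, r_rejected):
    # The reward model should score the human-preferred response
    # above the rejected one:
    #   loss = -log sigmoid(r_chosen - r_rejected)
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Equal scores give loss log(2) ~ 0.693; a larger margin gives a smaller loss.
print(preference_loss(0.0, 0.0))
print(preference_loss(2.0, 0.0))
```

Minimizing this loss over many labeled comparisons pushes the reward model toward human preferences; the language model is then finetuned with RL against that reward.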
1. Language models could become much better literary stylists soon. What does this mean for literature? A highly speculative thread.
2. Today's models have limited access to sound patterns / rhythm, but this doesn't seem hard to fix: change BPE, add phonetic annotations or multimodality (CLIP for sound), finetune with RL from human feedback. GPT-3 is a good stylist despite these handicaps! gwern.net/GPT-3#rhyming
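To see why BPE limits access to sound patterns: rhyming words like "through" and "threw" can end up with unrelated subword tokens, so rhyme is invisible at the token level. A toy greedy longest-match segmenter makes the point -- the vocabulary here is hypothetical, not GPT-3's real tokenizer:

```python
def toy_bpe_segment(word, vocab):
    # Greedy longest-match segmentation against a fixed subword vocabulary
    # (a stand-in for real BPE merges). Single characters are always allowed.
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"thr", "ough", "ew", "th", "gh"}
print(toy_bpe_segment("through", vocab))  # ['thr', 'ough']
print(toy_bpe_segment("threw", vocab))    # ['thr', 'ew']
```

The two words rhyme perfectly, but their final tokens ('ough' vs 'ew') share nothing -- phonetics simply isn't represented in spelling-based subwords.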
3. There are already large efforts to make long-form generation more truthful and coherent (WebGPT/LaMDA/RETRO) which should carry over to fiction. RL finetuning specifically for literature will help a lot (see openai.com/blog/summarizi…, HHH, InstructGPT)
What are some domains of knowledge where big language models will be impactful?
Maybe domains with vast, messy stores of content that few humans master. E.g. 1. All US laws + regulations 2. Biological details of every beetle species (>1M species) 3. All code in a Boeing 787 (14M lines)
4. Function of all genes in all genomes (20k genes in humans) 5. Obscure human languages (e.g. Akkadian) 6. For a big company, the standard operating procedures for every staff role.
Let’s say there’s N items of interconnected knowledge in a domain. Even if humans can understand any *one* item better than a GPT-3-like model, the model can provide value by understanding N>100,000 items modestly well.
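A back-of-the-envelope sense of scale for the "interconnected" part: with N items, the number of pairwise links a model could in principle relate grows as N(N-1)/2, far beyond what any one person can hold:

```python
def pairwise_links(n):
    # Number of unordered pairs among n items of knowledge.
    return n * (n - 1) // 2

print(pairwise_links(100_000))  # 4999950000 pairwise connections
```

So at N = 100,000 items there are ~5 billion possible pairwise connections -- modest per-item understanding can still be valuable at that breadth.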
Education reform ideas, starting with the least radical: 1. Outside the USA, get rid of "early specialization" in high school/uni and switch to the flexible, liberal-arts US system 2. Outside the UK, switch to UK-style short degrees (3-year BA, 1-year MA, 3-year PhD)
3. Expand coding, CS, AI, and data science through the whole education system. It's the new "reading, writing, arithmetic." 4. Allow BA degrees by open examination (fee = wage for the examiner to grade the papers). Allow PhDs by open submission of a thesis.
5. Don't require a PhD to become an academic (e.g. require a 2-3 year master's instead, as in the old UK system)
(Getting more radical...) 6. Reduce age segregation in school and uni. Most important: normalize people starting uni (or uni-level colleges) aged 14-18.
1/n. Will there be any more profound, fundamental discoveries like Newtonian physics, Darwinism, Turing computation, QM, molecular genetics, deep learning?
Maybe -- and here's some wild guesses about what they'll be...
2/n.
Guess (1): New crypto-economic foundations of society. We might move to a society based on precise computational mechanisms:
a) smart contracts with ML oracles
b) ML algorithms that learn and aggregate our preferences/beliefs, and make societal decisions/allocations based on them
3/n. We see small, specialized instances today (crypto/DeFi, AI-enabled ad auctions, prediction markets, recommender systems), but the space of possibilities is large and today's Bitcoin may not be very representative.
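One concrete (and deliberately non-ML) instance of "aggregate our preferences ... make societal decisions" is a classical voting rule like the Borda count over ranked ballots. The options and ballots below are hypothetical, chosen only to illustrate the aggregation step:

```python
from collections import defaultdict

def borda_winner(ballots):
    # Each ballot ranks options best-to-worst; an option in position p
    # on a ballot of n options earns (n - p) points. Highest total wins.
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for pos, option in enumerate(ballot):
            scores[option] += n - pos
    return max(scores, key=scores.get)

ballots = [
    ["parks", "transit", "housing"],
    ["transit", "parks", "housing"],
    ["housing", "transit", "parks"],
]
print(borda_winner(ballots))  # transit
```

The speculative "crypto-economic" version would replace hand-written ballots with learned preference models and run the aggregation inside a verifiable mechanism (e.g. a smart contract).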