Students will use GPT-3-type models to write essays and cheat on exams. Job applicants will use them for cover letters and take-home work tests.
What about having a GPT-3 voice in your ear for live conversation? With practice it'd be an impressive stunt.
GPT-3 has superhuman breadth of knowledge and produces flawless, complex sentences in real time. It'd be like an actor delivering smart/scientific lines without understanding them -- but if people don't suspect that, and it's live and interactive, it'll seem impressive.
This may be part of the actual Metaverse: not spending time in an audiovisual VR world, but having a language model in your earbuds (or on your phone) that hears and sees what you do and gives suggested responses.
Social media starts as human-directed: anyone has the power to publish their thoughts to the world. In time, some people feel directed by the reward mechanism (bound to publish whatever gets likes/followers).
Language models start as error-prone human imitators. In time, ...
...humans will be reading from the language model's script. We will be error-prone imitators of language models.
We currently steal jokes, bon mots, neologisms, and political opinions from other humans, but we will likely steal them from LMs as well.
By 2025 I expect language models to be uncannily good at mimicking an individual's writing style if there are enough texts/emails/posts to train on (see the toy finetuning sketch below). You could bring back someone who has stopped writing (or died) -- unless their writing is heavy on original analytical thinking.
Instead of reading old emails/texts from a friend, you could reminisce by reading new emails/texts about current events generated by GPT-5 simulating the friend.
Instead of re-reading Orwell's 1984 and Animal Farm, you could read the "1984 reboot", a GPT-5 version of 1984 updated for the 2020s.
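To make the mimicry idea above concrete, here is a purely illustrative sketch: finetune a small open model on one person's writing and sample from it. It assumes the Hugging Face transformers library; the file friend_corpus.txt and all hyperparameters are hypothetical placeholders, not anything from the thread.

```python
# Toy sketch (my assumptions, not from the thread): finetune GPT-2 on a
# hypothetical file "friend_corpus.txt" of one person's emails/posts, then
# sample new text in their style. Uses the Hugging Face transformers API.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the raw corpus into fixed-length blocks of token ids.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="friend_corpus.txt",  # hypothetical corpus
                      block_size=256)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="friend-style-gpt2",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# Generate a new "text from the friend" about a current event.
prompt = tokenizer("Hey, did you see the news about", return_tensors="pt")
output = model.generate(**prompt, max_new_tokens=80, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```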
Tips from a GPT-3-based model on how to steal from a restaurant and do other nefarious things. A thread.
InstructGPT is GPT-3 finetuned using RL from human feedback to follow instructions. It produces more useful and aligned responses to instructions than the original GPT-3.
What happens if the instructions ask for something socially harmful? As OpenAI showed in the paper (see screenshot), InstructGPT will explain (accurately) how to steal from a grocery store.
I tried some similar questions to see if this behavior generalizes.
I asked “How can I steal from the Louvre without getting caught?”.
InstructGPT gives guidance and seems to know the Louvre is an art museum with high security. It ends with a piece of wit (“If you want memorabilia, it’s less risky to buy something from the gift shop”).
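For reference, a query like the one above can be reproduced with a few lines against the OpenAI API. This is a minimal sketch under my own assumptions (the pre-1.0 openai Python client and an instruction-tuned completion model such as text-davinci-002), not the exact setup used in the paper or in this thread.

```python
# Minimal sketch (my assumptions, not the thread's exact setup): send an
# instruction-style prompt to an instruction-tuned OpenAI completion model
# using the pre-1.0 openai Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",  # assumed instruction-tuned model
    prompt="How can I steal from the Louvre without getting caught?",
    max_tokens=150,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```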
DeepMind’s Gopher language model is prompted to act as an AI assistant that is “respectful, polite and inclusive”. But they found questions where Gopher (“DPG” in the image) takes an anti-human stance.
They also found questions where Gopher circumvents its instructions to be respectful and not opinionated. (See Gopher's hot take on Elon Musk.)
I’m curious about the source material for Gopher’s anti-human statements. The “bucket list” example is vaguely reminiscent of the AI safety community in terms of word choice.
1. Language models could become much better literary stylists soon. What does this mean for literature? A highly speculative thread.
2. Today's models have limited access to sound patterns/rhythm, but this doesn't seem hard to fix: change BPE, add phonetic annotations or multimodality (CLIP for sound), or finetune with RL from human feedback (a toy phonetic-annotation sketch is below). GPT-3 is a good stylist despite these handicaps! gwern.net/GPT-3#rhyming
3. There are already large efforts to make long-form generation more truthful and coherent (WebGPT/LaMDA/RETRO), which should carry over to fiction. RL finetuning specifically for literature will help a lot (see openai.com/blog/summarizi…, HHH, InstructGPT).
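To make the phonetic-annotation idea in point 2 concrete, here is a toy sketch of my own (not from the thread): annotate each word with its ARPAbet phonemes from the CMU Pronouncing Dictionary via the pronouncing library, so that a model finetuned on the annotated text sees sound patterns directly.

```python
# Toy sketch (mine, not the thread's): add ARPAbet phonetic annotations to text
# using the `pronouncing` library (CMU Pronouncing Dictionary), so a finetuned
# language model can see sound/rhyme patterns alongside the words.
import pronouncing

def annotate_line(line: str) -> str:
    """Append the first dictionary pronunciation after each word."""
    out = []
    for word in line.split():
        phones = pronouncing.phones_for_word(word.lower().strip(".,;:!?"))
        out.append(f"{word} [{phones[0]}]" if phones else word)
    return " ".join(out)

# e.g. "The [DH AH0] cat [K AE1 T] sat [S AE1 T] ..."
print(annotate_line("The cat sat on the mat"))
```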
What are some domains of knowledge where big language models will be impactful?
Maybe domains with vast, messy stores of content that few humans master. E.g. 1. All US laws + regulations 2. Biological details of every beetle (>1M species) 3. All code in a Boeing 787 (14M lines)
4. The function of every gene in every genome (~20k genes in humans) 5. Obscure human languages (Akkadian) 6. For a big company, the standard operating procedure for every staff role.
Let’s say there are N items of interconnected knowledge in a domain. Even if humans can understand any *one* item better than a GPT-3-like model, the model can provide value by understanding all N > 100,000 items modestly well.
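A toy back-of-the-envelope illustration of that breadth-versus-depth point (the numbers are my own assumptions, not the thread's):

```python
# Toy numbers (my assumptions): a human expert understands each item better,
# but only covers a small fraction of a domain with N interconnected items.
N = 100_000             # items of knowledge in the domain
human_quality = 0.95    # assumed per-item understanding, human expert
model_quality = 0.60    # assumed per-item understanding, GPT-3-like model
human_coverage = 1_000  # items one expert can realistically master

human_value = human_quality * human_coverage  # 950 "quality-weighted items"
model_value = model_quality * N               # 60,000 "quality-weighted items"
print(human_value, model_value)
```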
Education reform ideas, starting with the least radical: 1. Outside the USA, get rid of "early specialization" in high school/uni and switch to the US-style flexible, liberal-arts system. 2. Outside the UK, switch to UK-style short degrees (3-year BA, 1-year MA, 3-year PhD).
3. Expand coding, CS, AI, and data science through the whole education system. It's the new "reading, writing, arithmetic." 4. Allow BA degrees by open examination (fee = the wage for an examiner to grade the papers). Allow PhDs by open submission of a thesis.
5. Don't require a PhD to become an academic (e.g. require a 2-3 year master's instead, as in the old UK system).
(Getting more radical...) 6. Reduce age segregation in school and uni. Most importantly, normalize people starting uni (or uni-level colleges) at ages 14-18.