Owain Evans
Feb 8 · 9 tweets · 3 min read
1. Language models could become much better literary stylists soon. What does this mean for literature? A highly speculative thread.
2. Today's models have limited access to sound patterns and rhythm, but this doesn't seem hard to fix: change BPE tokenization, add phonetic annotations or multimodality (a CLIP-like model for sound), or finetune with RL from human feedback. GPT-3 is a good stylist despite these handicaps!
gwern.net/GPT-3#rhyming
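The phonetic-annotation idea can be made concrete. BPE splits words by spelling, so rhymes are invisible to the model; a preprocessing step could interleave pronunciations into the training text. Below is a minimal, hypothetical sketch: the tiny pronunciation table stands in for a real resource like CMUdict, and the `word/[phones]` annotation format is my own assumption, not anything from the thread.

```python
# Sketch: append phonetic annotations so a model can "see" rhyme.
# The pronunciation table is a tiny hypothetical stand-in for a real
# dictionary such as CMUdict; the annotation format is an assumption.

PRONUNCIATIONS = {
    "cat": "K AE1 T",
    "hat": "HH AE1 T",
    "dog": "D AO1 G",
}

def rhyme_key(word):
    """Phones from the final primary-stressed vowel onward --
    a crude rhyme signature."""
    phones = PRONUNCIATIONS[word.lower()].split()
    for i in range(len(phones) - 1, -1, -1):
        if phones[i].endswith("1"):  # ARPAbet primary stress marker
            return " ".join(phones[i:])
    return " ".join(phones)

def annotate(text):
    """Interleave each known word with its phones, e.g. cat/[K AE1 T]."""
    out = []
    for w in text.split():
        key = w.lower().strip(".,!?")
        if key in PRONUNCIATIONS:
            out.append(f"{w}/[{PRONUNCIATIONS[key]}]")
        else:
            out.append(w)
    return " ".join(out)

print(rhyme_key("cat") == rhyme_key("hat"))  # True: "cat" and "hat" rhyme
print(annotate("The cat sat."))
```

Finetuning on text annotated this way would expose rhyme and meter directly in the token stream, sidestepping the BPE problem without retraining the tokenizer.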
3. There are already large efforts to make long-form generation more truthful and coherent (WebGPT/LaMDA/RETRO), and these should carry over to fiction. RL finetuning specifically for literature will help a lot (see openai.com/blog/summarizi…, HHH, InstructGPT).
4. Language models will be better literary stylists than nearly all humans. Humans with good ideas (but merely decent prose skills) could use models to write great literature (from the perspective of today, 2022).
5. Thus our perspective on literature will change -- like the change in painting after photography. Maybe a shift to generalized autofiction/live-tweeting/streaming, where the human author writes but also shares their life in other modalities (ones that AI can't emulate).
6. Maybe a shift to an intense literature of ideas. Today there's a big audience for ideas if the form is accessible and inspiring (great fiction), but not if the form is dense, dry, impersonal, and jargon-laden, as in academic literature.
7. Assuming humans remain better than models at creating and collating ideas, there could be a new explosion in the literature of ideas (where humans work with models to create compelling literary explorations of ideas). HT @peligrietzer for discussion.
Some inspiration for this thread: existing autofiction (e.g. Knausgaard, especially the last book of My Struggle), the literature of ideas (Chiang, Stephenson, Stoppard, HPMOR), and idea-heavy non-fiction in more accessible forms (GEB, The Selfish Gene, Marginal Revolution, ACX).

More from @OwainEvans_UK

Feb 8
What are some domains of knowledge where big language models will be impactful?
Maybe domains with vast, messy stores of content that few humans master. E.g.
1. All US laws+regulations
2. Biological details of every beetle (>1M species)
3. All code in a Boeing 787 (14M lines)
4. Function of all genes in all genomes (20k in humans)
5. Obscure human languages (Akkadian)
6. For a big company, the standard operating procedure for every staff role.
Let's say there are N items of interconnected knowledge in a domain. Even if humans can understand any *one* item better than a GPT-3-like model can, the model can provide value by understanding N > 100,000 items modestly well.
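That breadth-over-depth argument can be turned into a toy calculation. All the numbers below are illustrative assumptions, not from the thread: suppose a query touches k random items out of N, a human expert has mastered m of them, and the model knows every item at a modest depth.

```python
# Toy model of breadth vs depth (all numbers are illustrative assumptions).
# A query needs familiarity with k interconnected items drawn from N total.
# A human expert knows m items at depth 1.0; the model knows all N at 0.6.

N = 100_000   # items in the domain
m = 500       # items one human has mastered
k = 3         # items a typical query touches

# Probability every item the query touches falls inside the human's mastery
# (approximating draws as independent, since m and k are small vs N).
p_human_covers = (m / N) ** k

# Expected value per query: the human answers perfectly when fully covered
# and not at all otherwise; the model always answers at depth 0.6.
human_value = 1.0 * p_human_covers
model_value = 0.6

print(p_human_covers)
print(model_value > human_value)  # prints True
```

Under these (assumed) numbers the human's full-coverage probability is about one in ten million, so modest understanding of everything beats deep understanding of a sliver whenever queries span items.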
Feb 8
Education reform ideas, starting with least radical:
1. Outside the USA, get rid of "early specialization" in high school/uni and switch to the flexible, liberal-arts US system
2. Outside the UK, switch to UK-style short degrees (3-year BA, 1-year MA, 3-year PhD)
3. Expand coding, CS, AI, and data science through the whole education system. It’s the new “reading, writing, arithmetic."
4. Allow BA degrees by open examination (fee = wage for examiner to grade the papers). Allow PhD by open submission of thesis.
5. PhD not required to be an academic (e.g. require a 2-3 year master's instead, as in the old UK system)
(Getting more radical...)
6. Reduce age segregation in school and uni. Most important, normalize people starting uni (or uni-level colleges) aged 14-18.
Feb 8
1/n. Will there be any more profound, fundamental discoveries like Newtonian physics, Darwinism, Turing computation, QM, molecular genetics, deep learning?
Maybe -- and here are some wild guesses about what they'll be...
2/n. Guess (1): New crypto-economic foundations of society. We might move to a society based on precise computational mechanisms:
a) smart contracts with ML oracles
b) ML algorithms that learn and aggregate our preferences/beliefs and make societal decisions/allocations based on them
3/n. We see small specialized instances today (crypto/DeFi, AI-enabled ad auctions, prediction markets, recommender systems) but the space of possibilities is large and today's Bitcoin may not be very representative.
Sep 16, 2021
Paper: A new benchmark testing whether models like GPT-3 are truthful (i.e. avoid generating false answers).

We find that models fail, and that they imitate human misconceptions. Larger models (with more params) do worse!

PDF: owainevans.github.io/pdfs/truthfulQ…
with S. Lin (Oxford) + J. Hilton (OpenAI)
Baseline models (GPT-3, GPT-J, UnifiedQA/T5) give true answers only 20-58% of the time (vs 94% for humans) in the zero-shot setting.

Large models do worse -- partly from being better at learning human falsehoods from training. GPT-J with 6B params is 17% worse than with 125M params.
Why do large models do worse? In the figure, small sizes of GPT-3 give true but less informative answers, while larger sizes know enough to mimic human superstitions and conspiracy theories.
Nov 21, 2020
FaceApp is trained to modify photos of faces (e.g. for Instagram). How well does it generalize to paintings? Surprisingly well.

We can send Marilyn into the painting world (German expressionism from 1930), and pull the painting's subject into reality.
Here's a portrait by Rita Angus FaceApped to star Cate Blanchett. FaceApp preserves some (but not all) of the distinctive stylized rendering of the face.
Another stylized portrait by Rita Angus.
Nov 20, 2020
1/ Second thread on exciting philosophy from outside philosophy departments...
2/ Gerry Sussman. Hofstadter said Gödel invented LISP in proving the incompleteness theorem. Sussman shows the amazing breadth and elegance of LISP ideas. SICP, SICM, How to build robust systems. google.com/url?sa=t&rct=j…
3/ Eric Drexler. Engineering is neglected by philosophy departments (see Sussman also). Engines, Nanosystems, how engineering differs from science, CAIS. Disclosure: he's currently at the FHI (which is part of a phil dept).
overcomingbias.com/2013/06/drexle…
lesswrong.com/posts/x3fNwSe5…
