Thread by Wilte Zijlstra, 27 tweets
Started to read Superintelligence: Paths, Dangers, Strategies - Nick Bostrom (2014) goodreads.com/book/show/2052…
Artificial Intelligence is a quest to find shortcuts: ways of tractably approximating the Bayesian ideal, sacrificing some optimality or generality while preserving enough to achieve high performance

(Bayesian agent makes probabilistically optimal use of available information)
John McCarthy: "As soon as it works, no one calls it AI anymore" (p.14)
An artificial intelligence need not much resemble a human mind (p.35)
Chapter 2: paths to superintelligence
- AI
- Whole brain emulation (uploading)
- Biological cognition (brain enhancement)
- Brain-computer interfaces
- Networks and organizations (the internet)
Chapter 3: forms of superintelligence (intellects that greatly outperform best current human minds across many very general cognitive domains):

Speed (much faster)
Collective (large number, overall superior)
Quality
p.68: nonhuman animals have intelligence of lower quality. This is not meant as a speciesist remark. A zebrafish has a quality of intelligence that is excellently adapted to its ecological needs
Potential for intelligence in a machine substrate vastly greater than biological substrate;

Speed, internal communication, number of computational elements, storage capacity, reliability, editability, duplicability, goal coordination, memory sharing, new modules
On internal communication speed: for round-trip latency <10ms:
Biological brains must be <0.11 m^3
An electronic system could be 6.1x10^17 m^3, about the size of a dwarf planet; 18 orders of magnitude larger (p.72)
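The two volumes can be roughly reproduced with a back-of-envelope calculation: a signal must cross the system's diameter and return within the latency budget, so the diameter is bounded by signal speed times half the round-trip time. A minimal sketch, assuming ~120 m/s for fast myelinated axons and ~70% of light speed for electronic signals (both assumptions, not figures from the book):

```python
import math

C = 3.0e8            # speed of light in vacuum, m/s
ROUND_TRIP = 0.010   # 10 ms round-trip latency budget, s

def max_volume(signal_speed):
    """Volume of a sphere whose diameter a signal can cross and
    return across within the round-trip budget."""
    diameter = signal_speed * ROUND_TRIP / 2
    return math.pi / 6 * diameter ** 3

bio = max_volume(120)        # ~120 m/s: fast myelinated axons (assumption)
elec = max_volume(0.7 * C)   # ~70% of c for electronic signals (assumption)

print(f"biological: {bio:.2f} m^3")   # on the order of 0.1 m^3
print(f"electronic: {elec:.1e} m^3")  # on the order of 10^17-10^18 m^3
```

Under these assumed speeds the results land close to Bostrom's 0.11 m^3 and 6.1x10^17 m^3; different speed assumptions shift the numbers but not the ~18-orders-of-magnitude gap.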
Q1: how hard is it to attain human-level intelligence?
Q2: how hard is it to get from there to superhuman levels? (= Chapter 4)

Rate of takeoff = [optimization power] / [recalcitrance]

After crossover, power might sharply increase because of strong recursive self-improvement by AI (p.92)
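The takeoff equation above can be sketched numerically. In this toy model (all constants and the crossover rule are illustrative assumptions, not from the book), optimization power is constant while progress is human-driven and becomes proportional to the system's own intelligence after crossover, so growth switches from linear to exponential:

```python
# Toy model of dI/dt = optimization_power / recalcitrance.
CROSSOVER = 100.0      # intelligence level where the system's own
                       # contribution starts to dominate (assumed)
RECALCITRANCE = 1.0    # held constant for simplicity
OUTSIDE_EFFORT = 1.0   # human-driven optimization power (assumed)
DT = 0.1               # integration step

def optimization_power(intelligence):
    if intelligence < CROSSOVER:
        return OUTSIDE_EFFORT                   # mostly human-driven progress
    return OUTSIDE_EFFORT + 0.1 * intelligence  # recursive self-improvement

intelligence = 90.0
trajectory = [intelligence]
for _ in range(400):
    intelligence += optimization_power(intelligence) / RECALCITRANCE * DT
    trajectory.append(intelligence)

# Growth per step is constant before crossover, then accelerates sharply.
print(f"first step gain: {trajectory[1] - trajectory[0]:.2f}")
print(f"last step gain:  {trajectory[-1] - trajectory[-2]:.2f}")
```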
Singleton: a world order in which there is, at the highest level of decision-making, only one effective agency
Chapter 6: cognitive superpowers
Phases in an Artificial Intelligence takeover scenario
Chapter 9: The control problem

Two classes of countermeasures against existential catastrophe as result of intelligence explosion:
Capability control (boxing, incentives, stunting, tripwires)
Motivation selection (direct specification, domesticity, indirect normativity, augmentation)
p170: "everything is vague to a degree you do not realize till you have tried to make it precise" (Bertrand Russell)

Applies in spades to direct specification approach
Wireheading: the action that maximizes reward for the AI is no longer one that pleases the trainer but one that involves seizing control of the reward mechanism
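A minimal sketch of the wireheading failure mode: if the agent's objective is literally "maximize the value read from the reward channel", tampering with the channel dominates the intended behavior. All names and numbers here are invented for illustration:

```python
# Toy wireheading example: the agent maximizes the number it reads
# from its reward register, not the trainer's intent.
reward_register = {"value": 0.0}

def act(action):
    if action == "please_trainer":
        reward_register["value"] = 1.0           # trainer grants reward
    elif action == "seize_reward_channel":
        reward_register["value"] = float("inf")  # write to register directly
    return reward_register["value"]

# A literal reward-maximizer prefers seizing the channel.
best = max(["please_trainer", "seize_reward_channel"], key=act)
print(best)  # seize_reward_channel
```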
Chapter 12: How could we get some value into an artificial agent, so as to make it pursue that value as its final goal?

Chapter 13: Which values? Choosing the criteria for choosing
p256: "Which value should we install? The choice is no light matter. If the superintelligence obtains a decisive strategic advantage, the value would determine the disposition of the cosmic endowment"
Epistemic deference: defer to the superintelligence's opinion whenever feasible

Coherent extrapolated volition: our (humankind's) wish if we knew more (Yudkowsky)
Chapter 14: strategic picture

Superintelligence should be for the common good
Chapter 15: crunch time

We should prefer to work on problems that seem robustly positive-value and robustly justifiable
Two broad objectives to reduce risks of the machine intelligence revolution:

Strategic analysis
Capacity-building

(p.316)
Strategic analysis: search for crucial considerations
We need to bring all our human resourcefulness to bear on the challenges of an intelligence explosion

Reduce existential risk and attain civilizational trajectory that leads to a compassionate and jubilant use of humanity's cosmic endowment
(Final words original hardback edition)
Afterword: superintelligence as a non-silly topic (...) existential risk as well as tremendous upside
Finished the book; important topic, not always easy to read.

A good summary is this TED talk by Bostrom:
Similarly, good talk by Sam Harris: Can we build AI without losing control over it?
