Ok, everyone. I wrote up my first draft of my counterargument to the Critical Rationalist argument against AI risk.

My hope is that folks will read this document carefully, and leave comments, noting which specific claims of mine seem false, and, if you think some part of my story is wrong, outlining how it works instead.

docs.google.com/document/d/12b…
I've done my best to state things clearly and in detail. But probably some parts of this will be unclear and we will run into more miscommunications.

Nevertheless, it seems better to post it, and see what those miscommunications are, and then I can try to clarify them.
Also, feel free to point out any typos.
I'm also contemplating a different essay, "On hostility", which is intended to be a disjunctive argument about when you can expect agents in general to cooperate peacefully with you, and when you can expect them to resort to violence.
Hopefully that one will be tighter when I've finished it, but we'll see.

• • •

More from Eli Tyre (@EpistemicHope)

10 Jan
My catch-all thread for this discussion of AI risk in relation to Critical Rationalism, to summarize what's happened so far and how to go forward from here.
I started by simply stating that I thought the arguments I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.

So far, a few people have engaged pretty extensively with me, for instance, scheduling video calls to talk about some of the stuff, or long private chats.

(Links to some of those that are public at the bottom of the thread.)
10 Jan
I am increasingly impressed with @robertskmiles's videos on AI safety topics.

They're a really fantastic resource, since they're well explained, and it is much easier to ask a person to watch a YouTube video than it is to read a long series of blog posts, or, even worse, a book.
(In a conversation, it is feasible to just sit down with a person and watch a 15-minute video together at 1.5x speed, and then dive back into discussion, in a way that it is a lot less feasible to say "read this" and sit there while they rush through a post or three.)
This one on the orthogonality thesis is solid.

9 Jan
My understanding is that there was a 10-year period starting around 1868, during which South Carolina's legislature was mostly black and the universities were integrated (causing most white students to leave), before the Dixiecrats regained power.
I would like to find a relatively non-partisan account of this period.

Anyone have suggestions?
9 Jan
Alright. I wrote up another essay outlining the argument from AI doom, pretty much from the top.

docs.google.com/document/d/1D3…

This one is about half as long as the other one, and (I think) somewhat crisper and more legible in its argumentation.
(Though you might still need the other one to get why "it will criticize its own goals" doesn't mean that it will get anything like human morals.)
So overall, I more strongly recommend that people read this one.
5 Jan
In the past few months I've shifted my implicit thinking about meditation and enlightenment.

I've gone from thinking:

"Enlightenment is probably a real thing, and probably related to processing epistemic technical debt somehow.
Probably it also has something to do with noticing the 'edges' of how you're projecting your reality, and getting a visceral sense of the difference between 'the movie' and 'the screen the movie is projected on.'
In particular, enlightenment (probably) is or is the result of progressing far enough down a particular psychological axis, in the "good direction".
5 Jan
Ok. I've been trying to have this conversation on Twitter, and it's been... difficult, so far.

The nature of Twitter, and the number of people involved, has caused me to neglect some of the basic moves for good conversation.

Including many moves that I actively teach. Embarrassing!

In particular, given the number of people responding to me, I've fallen into a pattern of giving counterarguments to specific, false (in my view) claims, without checking or showing that I've understood the claims.
So (aided by @VictorLevoso's example in a private correspondence), I'm going to offer a paraphrase of my current understanding of the Crit Rat view on AI risk, in a central place where everyone can respond at once.
Read 29 tweets
