@paulg There once was a startup that had no business plan. "Why worry?" they said. "You can't predict the future. Who, in 2018, expected Covid-19? Our future business difficulties are impossible to correctly imagine. Why pretend we can?"

"A business plan is not *the* future," 1/
@paulg ...replies Paul Graham. "A business plan checks model consistency: is there a plausible world where success is possible? And a business plan writes out your assumptions, which matters not because every assumption will be true, but so you notice if one is turning out false." 2/
@paulg (Of course, in real life, you've already decided not to fund the startup at this point. They're obviously quite doomed. It's not like you've got no choice but to bet the fortunes of your entire extended family on repairing this particular startup. But still, let's continue.) 3/
@paulg "Well, that may be *a* use of a business plan," replies the startup founder. "And I suppose that would be nice to have. But why are you so sure we're doomed without it? Truly the future is very impossible to predict; how, then, can you possibly know we'll get a bad outcome?" 4/
@paulg "That's like asking how I can know you won't win the lottery, since the future is uncertain and either you win or you don't," replies a now suspiciously Eliezer-like model of Paul Graham, using my favorite metaphor since I don't know Paul's. "Even if..." 5/
@paulg "...you admit to being uncertain, there's a question of how to slice the possibility space into outcomes you claim to be ignorant about. There's millions of lottery combinations to be unsure of, and only one winner. Some forms of ignorance look a lot like gloomy knowledge." 6/
@paulg "But our technology is different!" says the founder. And here we must depart from the metaphor, because their technology *is genuinely* different; so different that you could ask why not analogize "publishing a scientific discovery with no written plan", or "raising a baby". 7/
@paulg And so we ask the fundamental question of rationality: "What do you think you know, and how do you think you know it?" There are many specific things we can't reasonably know about the future. Are there some big, relevant things we *can* guess? And argue in 240 characters? 8/
@paulg (And the answer is: mostly no to the 240 characters part, especially if you want any countercounterarguments to common counterarguments. But hey, half the reason any good writer stays on Twitter is the ludicrous challenge of fitting arguments into 240 characters.) 9/
@paulg Here's one idea that a lot of thinkers in this field, especially computer scientists, ended up agreeing on, even if they disagreed about many other points:

If you think of a mind as a cognitive engine that outputs actions steering the future somewhere, there are many... 10/
@paulg ...possible directions it could steer the future *to*. Or to add on some extra (but defensible) ideas: there's a combinatorially vast space of possible agents with widely different utility functions, including extremely intelligent such agents. 11/
@paulg (If you object to this, I have to know which objection, in order to try to answer in 240 characters or less. But see arbital.com/p/orthogonalit….) 12/
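[Editor's note: to make the combinatorial point concrete, here is a minimal sketch; it is my illustration, not from the thread, and every name in it is hypothetical. The planner's predictive competence and its utility function are separate parameters, so the same machinery can be pointed at any goal.]

```python
# Minimal orthogonality sketch (illustrative; all names hypothetical).
# Competence lives in predict_outcome; the goal is a swappable parameter.

def best_action(actions, predict_outcome, utility):
    """Generic planner: take the action whose predicted outcome the
    utility function scores highest."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy world model: each action deterministically yields an outcome.
outcomes = {
    "make_staples":    {"staples": 1_000_000, "paperclips": 0},
    "make_paperclips": {"staples": 0, "paperclips": 1_000_000},
}

staple_utility = lambda world: world["staples"]
clip_utility   = lambda world: world["paperclips"]

# Identical planning machinery, different goals, different futures.
print(best_action(outcomes, outcomes.get, staple_utility))   # make_staples
print(best_action(outcomes, outcomes.get, clip_utility))     # make_paperclips
```

Nothing in best_action constrains what utility may be; that is the "combinatorially vast space" of agents in miniature.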
@paulg This idea already suggests a kind of work that might be necessary to get a good outcome for AGI. In the metaphor, it's like observing that even a startup with genuinely different technology still has expenses, and still needs revenue from somewhere. If you're "unsure"... 13/
@paulg ...about what kind of AI you get, maybe you should be unsure about whether it's a staples maximizer, or a paperclip maximizer, or a tiny-molecular-smileyface maximizer, etc. Just like being unsure if the lottery combination will be 1-2-3-4-5-6, or 1-2-3-4-5-7, etc. 14/
@paulg But suppose that the work to narrow down that space isn't done; and the outcome is an AGI that wants a weird thing. Is that necessarily bad? This is another place where we can ask "Is there anything we *can* know or strongly guess?", even though the future is uncertain. 15/
@paulg In this case, that base idea that many ended up agreeing on is "instrumental convergence". If you think of a cognitive engine that takes in sensory input, builds a model of the world, and outputs actions to steer the world, then some waypoints/strategies may look similar... 16/
@paulg ...across many different places the engine could be steering the future *to*, and many different options it might access for steering it. Matter and energy are useful for making staples *or* paperclips; our prediction that the cognitive engine instrumentally... 17/
@paulg ...pursues strategies for gathering matter and energy isn't very much dependent on what the lottery turns up, if it's a lottery with equal possibilities across staples and paperclips. arbital.com/p/instrumental… 18/
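[Editor's note: a companion sketch in the same toy style, again mine, with hypothetical names. Whichever terminal goal the lottery hands the agent, acquiring matter raises the utility it can eventually achieve, so the resource-gathering step is predictable even while the goal is not.]

```python
# Instrumental convergence sketch (illustrative; names hypothetical).

def gather_resources(world):
    """Tenfold the matter on hand."""
    return {**world, "matter": world["matter"] * 10}

def do_nothing(world):
    return dict(world)

def achievable_utility(world, goal):
    # Staples and paperclips are both made from matter, so either
    # goal's ceiling is set by the matter available. The goal drops
    # out of the comparison; that is the convergence.
    return world["matter"]

world = {"matter": 1.0}
for goal in ("staples", "paperclips"):
    best = max((gather_resources, do_nothing),
               key=lambda step: achievable_utility(step(world), goal))
    print(f"{goal}-maximizer's first move: {best.__name__}")
# Both iterations print: gather_resources
```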
@paulg Does this argument suffice to establish doom? Of course not. Many human beings who want weird things do useful-to-others work in the global economy and trade for the weird things they want. Does *that* argument establish safety? Only if you stop thinking when you get... 19/
@paulg ...an answer that makes you feel safe. Human beings have fellow-feeling by both instinct and culture; we tip in restaurants we'll never visit again. Human beings don't have power disparities great enough to offer "rewrite the other person's atoms" as an option vs trade. 20/
@paulg I could try extending these 240-character argument-units further, to cover "Would a superintelligent paperclip maximizer with a vast power disparity vs. humanity decide to trade with us?" being guessable. And more controversially, start to consider what we can know about... 21/
@paulg ...how fast cognitive intelligence might *scale* with more computing power and optimization. People who come up with comfortable-sounding guesses and then halt thought, for example, often say "Ah but maybe intelligence increases logarithmically with compute." We can know... 22/
@paulg ...this is false because hominid brain sizes were increasing while we evolved more general intelligence, which by the logic of evo-bio implies that the *marginal* returns to fitness, on brain size, were *increasing* over this evolutionary epoch (a rough formalization follows the thread). But maybe that wasn't... 23/
@paulg ...the point that you cared about in the first place, and in Twitter I can't try to answer all arguments in advance. But hopefully I've given you reason to hope that there are more argument-units like this. And the place where those further arguments lead up to... 24/
@paulg ...is this: Despite the future being impossible to predict, there's a sense in which our total inability to tell what GPT-3 is thinking is a very bad sign. Like a startup having no business plan, in a world where, just as a startup needs revenues above its irreducible... 25/
@paulg ...expenses, there is work that must be done somehow to survive. There is no published plan for living through this. I doubt any secret such plans would stand up to criticism. And we can guess in advance that this is a bad sign, before it all blows up. 26/END.
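[Editor's note: a back-of-the-envelope formalization of the brain-size argument in tweet 23. This model is mine, not Yudkowsky's; F, B, C, k, and c are all assumed toy quantities.]

```latex
% Toy model (my formalization, not from the thread): fitness from
% brain size s is cognitive benefit minus a roughly linear metabolic cost.
\[
  F(s) = B(s) - C(s), \qquad C'(s) \approx c > 0 .
\]
% Selection keeps increasing s only while marginal benefit beats
% marginal cost:
\[
  F'(s) > 0 \iff B'(s) > c .
\]
% If returns were logarithmic, B(s) = k \log s, then B'(s) = k/s is
% decreasing and growth stalls once s exceeds k/c. Sustained hominid
% brain growth instead requires B'(s) to stay above c across the epoch.
```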
