Venkatesh Rao
22 Feb, 18 tweets, 4 min read
“there are no general intelligences”
Yes, but....

The greatest failure of AI discourse has probably been the failure to distinguish clearly between Turing completeness and universal function approximation on the one hand, and intelligence on the other.
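To see how cheap that universality is, here’s a minimal sketch (my own illustration): a lookup table of step functions is a universal approximator for continuous functions on an interval, and nobody would call it intelligent.

```python
import math

def tabulate(f, lo, hi, n):
    # Piecewise-constant approximation: "universal" for continuous f
    # as n grows, and obviously not intelligent.
    xs = [lo + (hi - lo) * i / n for i in range(n)]
    table = [f(x) for x in xs]
    def approx(x):
        i = min(int((x - lo) / (hi - lo) * n), n - 1)
        return table[i]
    return approx

g = tabulate(math.sin, 0.0, math.pi, n=1000)
print(g(1.0), math.sin(1.0))  # close, by brute tabulation alone
```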

Minecraft though...
...is the right kind of mental model. I’ve always thought Von Neumann’s Universal Constructor (a self-replicating cellular automaton, computationally equivalent to a universal Turing machine) is a much better mental model because it foregrounds a) reproduction and b) randomness en.wikipedia.org/wiki/Von_Neuma…
In the UC model, it is known that random inputs from outside the boundary are required for open-ended evolution, which neatly ties interior and boundary intelligence together in a way that doesn’t garble mental models. In UC/Minecraft-type metaphors, this is just mutation.
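Here’s a minimal sketch of that tie-up, assuming a stand-in for the UC (Rule 110, an elementary cellular automaton that also happens to be Turing complete, rather than von Neumann’s 29-state construction): the interior follows its deterministic local rule while fresh random bits arrive at the boundary every step.

```python
import random

RULE = 110  # elementary CA rule, known to be Turing complete

def step(cells):
    # Fresh random bits at both boundaries: the "mutation" injected
    # from outside the system's boundary.
    padded = [random.randint(0, 1)] + cells + [random.randint(0, 1)]
    # Interior cells follow the deterministic local rule.
    return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

world = [0] * 40 + [1] + [0] * 40
for _ in range(20):
    print("".join(".#"[c] for c in world))
    world = step(world)
```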
UTMs in the infinite tape sense are more legible from a programming pov, but less useful from a thinking-about-AI pov.

In a UC sense, intelligence is embodied not by the universal “containing” metaphor but the creatures that evolve within it. Down specific but open-ended paths.
A useful concept to keep in mind is the “Turing tarpit,” which is criminally under-discussed. Many notions of AGI, I suspect, will reduce to tarpits.

“everything is possible but nothing of interest is easy.”

Thank god humans are not Turing tarpits.

en.wikipedia.org/wiki/Turing_ta…
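For flavor, here’s the canonical tarpit in action: Brainfuck, eight instructions, provably Turing complete. A minimal interpreter sketch of my own (conventions vary), plus the ceremony it takes just to compute 2 + 3:

```python
def brainfuck(code, tape_len=30000):
    tape, ptr, pc = [0] * tape_len, 0, 0
    stack, jumps = [], {}
    for i, ch in enumerate(code):  # pre-match brackets for O(1) loop jumps
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        ch = code[pc]
        if ch == ">": ptr += 1
        elif ch == "<": ptr -= 1
        elif ch == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".": print(chr(tape[ptr]), end="")
        elif ch == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif ch == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1

# 2 + 3: load 2 and 3 into adjacent cells, drain one into the other,
# then climb 48 increments to ASCII and print the digit "5".
brainfuck("++>+++[<+>-]<" + "+" * 48 + ".")
```

Everything is possible in it. Nothing, including addition, is easy.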
Humans are good at some things (opposable-thumb use, jokes), bad at others (inverting large matrices, computing large primes), and effectively unable to do some things at all due to limited lifespan × speed. These limits are more than a “finite tape” constraint on intelligence.
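Back-of-envelope on the lifespan × speed point (my numbers, purely illustrative): at one pencil-and-paper arithmetic operation per waking second for 80 years, Gaussian elimination tops out somewhere around a 1,200 × 1,200 matrix.

```python
SECONDS_AWAKE = 80 * 365 * 16 * 3600   # ~80 years of 16-hour waking days
lifetime_ops = SECONDS_AWAKE * 1       # ~1.7e9 ops at a generous 1 op/sec

n_max = int(lifetime_ops ** (1 / 3))   # Gaussian elimination ~ n^3 ops
print(f"{lifetime_ops:.2e} lifetime ops -> at most a ~{n_max} x {n_max} matrix")
```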
“Finite tape” maps to things like the number of neurons in a baby brain, or the number of gates in an FPGA: “size of blank canvas” measures. It’s general but in a trivial, featureless way, like “kg of steel” or “mAh of battery.” It’s disingenuous to migrate that to an intelligence qualifier.
I.e., you can’t go from a specific to a general intelligence by gradually increasing blank-canvas size. It’s like a non-constructive existence proof. Presumably GIs would use large canvases, but you can’t infer the existence of GIs from the existence of large canvases.
Von Neumann to the rescue again. There’s a lower limit on cellular automaton size below which self-reproduction is not possible, and that’s a nice *specific* threshold for a kind of universality: self-reproduction + noise means many *specific* intelligences are evolvable.
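A toy of self-reproduction + noise (entirely my own illustration; the “environment” is just a target bit pattern): replicators copy themselves with occasional bit-flips, and selection in one specific niche evolves one specific competence.

```python
import random

ENV = [random.randint(0, 1) for _ in range(32)]  # one specific niche

def fitness(genome):
    return sum(g == e for g, e in zip(genome, ENV))

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for _ in range(200):
    survivors = sorted(pop, key=fitness, reverse=True)[:10]
    # Reproduction with noise: copy a survivor, flip each bit w.p. 0.02.
    pop = [[g ^ (random.random() < 0.02) for g in random.choice(survivors)]
           for _ in range(50)]

print(fitness(max(pop, key=fitness)), "/ 32")  # climbs toward a perfect fit
```

Swap in a different ENV and the evolved genomes are junk: the competence is real but specific.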
Now this means, obviously, that intelligences which outcompete humans in *specific* classes of evolutionary environments are plausible. Does that mean we have a constructive path to AGI?

Not so fast! Many intelligences can already outcompete us if you limit environment range!
If the earth suddenly flooded fully, sharks might eat us all. A Covid descendant could wipe us out. Hell, an asteroid could outcompete us in the environment of colliding celestial bodies.

Nobody would call these “pwned by AGI paperclip optimizer” scenarios.

So what gives?
I think AGIers have in mind 2 conditions:

a) being outcompeted in a wide range of environments
b) looking like “super” versions of us

Many “intelligences” could satisfy a) without being “general” in any satisfyingly apocalyptic way.

b) is just anthropocentrism. Not interesting.
My belief is that no satisfying story will exist that fits the AGI template. All you’ll have is specific intelligences that will win in some conditions, lose in others against us, and will run the gamut from mutant viruses to toxic markets to brain-damaging memes.
If you’re looking to be pwned by a god-like intelligence, go ahead and believe in the scenario, but there’s no good reason to treat it as anything more than a preferred religious scenario. It has no real utility beyond meeting an emotional need.
There’s no useful activity or priority that emerges from that belief that doesn’t also emerge from ordinary engineering risk management. Bridge designers worry about bridges collapsing. Real ML system designers worry about concrete risks like classification bias. That’s... enough.
Basically, AGIs as a construct are technically unnecessary for thinking about AI. They add nothing beyond a few cute thought experiments. But they’re satisfying and enjoyable to think about for certain anthropocentric narratives.
Afaict, history tells us that interesting AI emerges from building specific intelligences that solve specific classes of problems, and then evolving them in path-dependent, open-ended ways. If any of them shows any signs of even narrow self-improvement, like AlphaGo Zero, great!
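For concreteness, a runnable toy of that narrow self-improvement loop (my own minimal stand-in for the self-play pattern, nothing like AlphaGo Zero’s actual architecture): tabular Q-learning that plays Nim against itself and gets better at Nim, and only at Nim.

```python
import random

N, MOVES = 21, (1, 2, 3)  # Nim: 21 stones, take 1-3, last taker wins
Q = {(s, m): 0.0 for s in range(1, N + 1) for m in MOVES if m <= s}

def pick(s, eps):
    moves = [m for m in MOVES if m <= s]
    if random.random() < eps:
        return random.choice(moves)             # explore
    return max(moves, key=lambda m: Q[(s, m)])  # exploit

for _ in range(50_000):  # self-play: the same table plays both sides
    s, history = N, []
    while s > 0:
        m = pick(s, eps=0.1)
        history.append((s, m))
        s -= m
    # The last mover won; credit moves with alternating +1/-1 from the end.
    for i, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, move)] += 0.01 * (reward - Q[(state, move)])

# Optimal play leaves the opponent a multiple of 4; expect [1, 2, 3].
print([pick(s, eps=0) for s in (5, 6, 7)])
```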
