Venkatesh Rao
20 Feb, 47 tweets, 9 min read
I hadn’t seen this critique of superintelligence before. Interesting. It lands roughly where I did but via a different route (his term is much cleverer, “AI cosplay”). Ht @Aelkus idlewords.com/talks/superint…
Deleted previous version of the tweet where I mistakenly attributed it to Bret Victor rather than Maciej Cegłowski. That makes much more sense. I was surprised to find myself agreeing with what I thought was Victor. In my head “idlewords” somehow sounds close to “worrydream”
My diagnosis was always a kind of anti-projection.

a) You think in a totalizing (INTJish) way and are impressed by its power

b) You see a machine that thinks in analogous ways and looks like it lacks your limits

c) You extrapolate its future as you minus biological limits
“Man made god in his image, wonders to perform.”
Note this is specifically a critique of the Bostrom-LW vision of the future of AI, based on an IQ++ model of what intelligence is. Not of all possible futures for the tech. It’s one that commits to a sequential evolutionary model where the prefix “super” makes sense.
The reason I don’t bother engaging with this conversation is that my starting point is ontologically at the opposite pole from IQ++. I don’t find “entrance tests for bureaucratic industrial orgs to test aptitude for their legible functions” to be an interesting place to start.
Mine is: “the brain is a 100 billion neuron system that from the inside (“mind”) doesn’t *feel* like it has 100 billion elements, but more like dozens to 100s of high level salient emergent phenomena operating on a rich narrative and verbal memory... what else looks like that?”
The answers are things like markets, ecosystems, weather systems. Billions of atomic moving parts, but quasi-stable macro-phenomenology. There may be nothing it is “like” to be a “market” but setting aside the hard problem of consciousness it is in the brain-class of things.
The most interesting and salient thing about these systems is that they are coherent and stable in a thermodynamic sense, maintaining boundary integrity and internal structural identity continuity for periods of time ranging from tens to thousands of years.
The general ability to do that is the superset class that includes what we point to with words like “intelligence.” It’s not quite appropriate to apply to markets or weather, but it helps calibrate

Brains : intelligence : mind :: markets : ?? : ?? :: weather : climate? : Gaia?
The foundation of this way of thinking is a complex-systems analogue to what physicists have lately been calling ontic-structural realism. Above my paygrade to explain the physics but Kenneth Shinozuka wrote a great guest post for me about it. ribbonfarm.com/2018/04/19/sym…
The central salient aspect of intelligence in this view is the *continuity of identity*, a smoothness in what it means to be something in structural terms. Ken explained it via reading the Ship of Theseus fable in terms of physics symmetry preservation etc.
Let me relate this to the IQ++ way of thinking, which has its utility. In this view, the idea of a “g factor” that correlates robustly with certain abilities for the human-form-factor of “intelligence” is something like “latitude” for a planet’s weather. An ℓ-factor.
Is the ℓ-factor important in understanding weather/climate? It does correlate strongly with weather patterns. If intelligence meant “snow-math ability,” then northern latitudes would be “smarter,” etc. But there’s something fundamentally besides-the-point about that as a starting point.
But to get back to my track, I’ve gotten back to digging into AI for the first time in 5 years. The ideas in the “attention is all you need” transformers paper and what’s come after are genuinely philosophically interesting! arxiv.org/abs/1706.03762
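For concreteness, here’s a minimal sketch of the scaled dot-product attention at the heart of that paper (plain NumPy, a single head, toy shapes; the function and variable names are mine, not the paper’s):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of query, key, and value vectors
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise query-key similarity
    weights = softmax(scores, axis=-1)   # each query attends over all keys
    return weights @ V                   # weighted sum of value vectors

# toy example: 4 tokens, 8-dimensional queries/keys/values
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```

The rest of the architecture is stacked around that one small operation.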
We’re finally past anchoring on “neural nets” as an unsatisfyingly mimetic way of thinking. I’m still trying to wrap my head around this stuff. Karpathy’s Software 2.0 vision is a great thought-starter here.
See also Chris Lattner’s commentary, which raises some more ideas. This is an AI conversation I actually can sink my teeth into and enjoy. I haven’t felt that way since the Dennett/Hofstadter era of philosophizing in The Mind’s I, which I read in 1996
This whole track of AI btw, came from a whole different place... people trying to use GPUs for parallel computing, Moore’s law raising the ceiling, etc. It did not come from pursuit of abstract science-fiction concerns. So those frames are likely to misguide.
I suspect to do well with this stuff, you have to kinda toss all that aside and focus on the real existing things, build mental models of what they actually are, down at the sparse matrix multiplication level, and build up situated abstractions application by application.
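To make “down at the sparse matrix multiplication level” concrete, here’s a throwaway sketch (scipy, arbitrary shapes and sparsity, nothing application-specific) of the kind of primitive everything ultimately bottoms out in:

```python
import numpy as np
from scipy import sparse

# A toy "layer": a sparse weight matrix applied to a batch of activation vectors.
# Shapes and density are made up, chosen only for illustration.
rng = np.random.default_rng(1)
W = sparse.random(512, 256, density=0.05, format="csr", random_state=1)  # ~5% nonzero weights
x = rng.standard_normal((256, 32))   # 32 activation vectors, each 256-dimensional

y = W @ x                            # the forward pass is stacks of products like this
print(W.nnz, y.shape)                # nonzero weight count, (512, 32)
```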
Divergent rather than convergent understandings. An anthropological understanding. Software 2.0 is a better term than AI, since it has less baggage but unfortunately makes the same linear evolution framing error that suggests a Software ∞.0 as the evolutionary asymptote. Still.
In a way, just as there was an AI winter technologically between ~1990-2002, there was a philosophical dry spell. Moravec’s paradox had been identified in the 80s but we didn’t have the tech to attack it till like 2009-10, and new phenomenology to think about till like 2015.
I do think the Singularity crowd helped keep the conversation going during the extended winter, and it’s important to acknowledge their institution-building contributions esp via founding influence on OpenAI, DeepMind etc. But both the tech and the conversation are MUCH bigger.
Reminds me of something similar in early computing history: for some California-obsessed people, the influence of the hippie counterculture on early computing in 1960-1985 via SRI, PARC, Stanford is the whole story, but objectively it’s like 1/5th of the story.
In brief, if you want to look it up, there are like 5-6 strands to the story:

1. Semiconductors/Bell labs/Noyce...
2. IAS machine/von Neumann track
3. California track
4. DoD track
5. MIT track
6. Control and cybernetics
This is by now well known to historians of computing. Somebody with a deeper understanding of AI history should do a similar “thick” version of the AI story. Dismissing the Singularity crowd as amateur entryists, or treating them as the whole story, are both bad historiography.
They mattered less than they believe, but more than critics are willing to give them credit for. Anyhow... back to the topic at hand. AI futures.

What does the AI future look like?
I think:

1. General purpose post-GPU hardware
2. Application-specific hardware optimization
3. An end to going faster than the Moore’s law ceiling
4. A software 2.0 stack that will evolve faster than people realize
5. Rapidly falling costs of AI compute
6. Smaller form factors
Ugh broke threading further up but this sub thread of 3 tweets fits better here anyway
What kind of a) tech trends and b) philosophical conversations can we expect on top of this basic outlook (which I know many agree with)?

Key prelim question: are we due for another AI winter due to hitting a new hardware ceiling and/or paradigm-limits of deep learning?
Tech trends: a Cambrian explosion of long-tail applications beyond language models or image sets for driverless cars. As costs plummet, people will do 5000 Software 2.0 things instead of 5.

Philosophy: divergent conversation that looks like biology/ecology/complex systems, not eschatology.
Will there be a new winter? Yes and no. The divergent nature of the future that has been opened up means “winter” vs “spring” will be an application-specific local weather pattern. Each divergent path of intelligence will sink or swim based on how good our mental models are.
Afaict while the ensemble and society-of-mind approaches are super influential in *AI in general* (and beyond), they are marginal and strongly underindexed in the Bostrom-LW school of AI because they don’t point cleanly to AGI-like futures but much messier ones.
My intent with this thread was to try and broaden the public AI convo of the west coast tech scene. There is a weird divergence between what’s happening at the bleeding edge of the tech itself, and the 2013-ish vintage eschatologically oriented “humans vs AI race” conversation frames.
IOW the private conversations around AI tech inside companies look very different from the conversation in public fora. A broadening would be helpful.
To make my own biases clear, I started out in classical controls when I started grad school and had landed in about a 40-30-30 mix of classical controls/robotics, GOFAI, and OR by the time I was done with postdoc and out of research.
That was 2006, a few years before deep learning took off. My more recent POV has been informed by ~10y consulting for semiconductor companies. Plus tracking the robotics side closely. So I have situated-cognition, hardware-first biases. Specific rather than general intelligence.
Starting from the control theory end of things creates barbell biases. On the one hand you deal with problems like “motor speed control.” On the other hand you end up dabbling in “system dynamics” which is the same technical apparatus applied to complex systems like economies.
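The “motor speed control” end of that barbell is about as simple as control gets. A toy proportional controller for a first-order motor model (all constants invented for illustration, not from any real motor):

```python
# Toy proportional speed control of a first-order motor model.
dt, tau, gain = 0.01, 0.5, 2.0   # timestep, motor time constant, motor gain
kp = 1.5                         # proportional feedback gain
setpoint = 100.0                 # desired speed

speed = 0.0
for step in range(500):
    error = setpoint - speed     # sensor: measure speed, compare to target
    voltage = kp * error         # actuator: command proportional to the error
    # first-order plant: speed relaxes toward gain * voltage with time constant tau
    speed += dt * (gain * voltage - speed) / tau

print(round(speed, 1))  # settles near kp*gain/(1 + kp*gain) * setpoint = 75.0
```

The other end of the barbell applies the same feedback-loop apparatus to something like an economy.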
My rant about “system dynamics” stuff I’ll save for another day. It shares many features of Singularitarianism. The OG system dynamics Limits to Growth report rhymes closely with “runaway AGI” type thinking.
My most basic commitment might be this: there have been models of universal computers and universal function approximators since Leibniz, but that does NOT mean “general intelligence” is a well-posed concept. I don’t think general intelligences exist basically.
An intelligence is NOT a powerful universal function approximator wrapped in a “context.”

An intelligence is a stable and continuous ontic-structural history for a specific starter lump of mass-energy. The primary way to “measure” it is in terms of how long it lives.
“Death” is dissolution of ontic-structural integrity for a *physical system*, and this destroys it as an existing intelligence. Ideas like uploads and mind-state-transfer are both ill-posed and uninteresting for anything complex enough to be called “intelligent.”
Unless of course you invent exact quantum-state cloning for macro-scale things. In which case teleporting to Alpha Centauri would be more interesting, and it wouldn’t be a way to cheat death.
Another way to think of it: intelligence is the whole territory of the physical system that embodies it. No reductive model-based state transfer preserving ontic-structural integrity and continuity will be possible. Cloning an intelligence is not like copying software code.
Obviously I’m not a Strong AI guy, and am pretty much in the David Chalmers camp on the hard problem.
I’m not saying this quite right. An intelligence exists within a thermodynamic boundary that separates it from the environment but does not *isolate* it. The nature of the intelligence is entangled with the specific environment and the boundary actually embodies much of it.
I’ll link to this 2017 thread I did on my idea of boundary intelligence. I need to revisit and update it. Again obvious biases from control theory (of course I model boundaries as being maintained by a sensor-actuator feedback loop)
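To make that bias concrete, here’s a toy version of what I mean by a boundary maintained by a sensor-actuator feedback loop (numbers made up on the spot, purely illustrative):

```python
import random

# Toy "boundary intelligence": sense the gap between internal state and a
# homeostatic target, actuate to close it, while the environment drifts and
# leaks across a permeable (not isolating) boundary.
random.seed(2)
internal, target, k = 20.0, 37.0, 0.8   # internal state, target, feedback gain
environment = 15.0

for t in range(300):
    environment += random.uniform(-0.5, 0.5)   # environment drifts unpredictably
    leak = 0.05 * (environment - internal)     # boundary is permeable, not isolating
    actuation = k * (target - internal)        # sense the error, push back toward target
    internal += leak + actuation

print(round(internal, 1))  # holds close to the target (~36) despite the drift
```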
