Didn’t realize this had become (deservedly) a sort of Chinese Room-level provocation that AGI types anxiously contort themselves to refute. Only recently realized, though, that it is basically Goodhart’s law in an AI context.
Fwiw I think this objection does in fact kill 90% of ideas for goal-directed AIs based on optimization. People arguing the semantics of how goals are not the same as reward functions miss the point. Optimization approaches force the degeneracy where they become the same.
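A toy sketch of what I mean by the degeneracy, in Python (my own illustration, with made-up functions `true_goal` and `proxy_reward`, not anything formal): hill-climb a measurable proxy that is only loosely correlated with what you actually want, and past a point the proxy keeps improving while the real goal collapses.

```python
# Toy Goodhart illustration: optimizing a proxy reward that is only loosely
# correlated with the "true" goal. Past a point, pushing the proxy harder
# makes the true goal worse -- the degeneracy where goal == reward function.
import random

def true_goal(x):
    # Hypothetical "what we actually want": peaks at x = 3.
    return -(x - 3) ** 2

def proxy_reward(x):
    # Hypothetical measurable reward: correlated with the goal near the
    # origin, but keeps rewarding larger x indefinitely.
    return x

x = 0.0
for _ in range(1000):
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy_reward(candidate) > proxy_reward(x):  # hill-climb the proxy only
        x = candidate

print(f"x after optimizing proxy: {x:.2f}")
print(f"proxy reward: {proxy_reward(x):.2f}, true goal: {true_goal(x):.2f}")
# The optimizer drives x far past 3; the proxy keeps climbing, the goal tanks.
```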
The only approach I can think of that finesses this is Carse’s infinite game (“play to continue the game” as opposed to “play to win”). This is hard to formalize though. You basically want a Turing-complete system that tries to not halt. Sort of a looking-for-Rule-110 thing.
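For reference, Rule 110 is the elementary cellular automaton that's known to be Turing complete; a minimal sketch (my own, purely illustrative) shows the flavor of a system whose whole point is to keep generating structure rather than settle or halt.

```python
# Minimal Rule 110 cellular automaton: each cell's next state is read off
# the bits of the number 110, indexed by the 3-cell neighborhood (left,self,right).
RULE = 110  # binary 01101110: the lookup table packed into one integer

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and just keep the game going.
cells = [0] * 63 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```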
The metaverse will need to be jailbroken to be usable at all
The longer I let the FB+Microsoft versions of the idea simmer, the more hopelessly lame they seem. It’s like Q demonstrating cool hardware to James Bond before Bond himself puts it through its paces.
laziness is actually a mitigating adaptation for being a pushover
You can only pick 2 of 3: industrious, pushover, conflict-averse
Otherwise you’ll spend your life being manipulated
A lot of being a pushover comes from simply not having particularly strong burning desires. You’re vulnerable to being co-opted by people who want more, more badly.
When someone asks you to do something and you can’t claim you’re up to something more important (the only socially acceptable excuse) without picking a fight, “I’m le tired” is your go-to.
Interesting version of the transhumanist argument you usually hear made from the other side. I still continue to not-resonate with Illich, but @LMSacasas’s riffs on him are always thought-provoking. theconvivialsociety.substack.com/p/the-human-bu…
This Illich line distills the essence of the humanist stance that I have a radical disagreement with:
“Increasing manipulation of man becomes necessary to overcome the resistance of his vital equilibrium to the dynamic of growing industries.”
There is no “vital equilibrium”
There are quasi-equilibrated conditions that have lasted longer than one or even a few lifetimes in civilization, but to find a meaningful example of a “vital equilibrium” in the human condition you probably have to rewind way past the Neolithic Revolution. Like 20,000 BC maybe.
Watched Black Widow and Shang-Chi. Post-Infinity War MCU feels a bit procedurally generated. It’s all efficiently scripted and produced and the effects are great, but it feels a bit soulless now. Felt the same about the TV shows too: The Falcon and the Winter Soldier, Loki.
Spider-Man: Far From Home too. And now there’s going to be Hawkeye and Groot shows. We’re in the fracking stage. I guess we’re seeing the downside of extended-universe storytelling. Past a point of narrative integration, there’s no more potential for surprise. Post-peak MCU.
I’m kinda off extended universes now. But don’t want stand-alone shit either. I think I want slightly messy and incoherent sprawl.
Was chatting with friends earlier about coax cables and how they’re better shielded against noise than twisted pair, but with power supplies it doesn’t matter because the voltage and current are high; apparently it would take something like a nuclear explosion to mess with a 20V supply.
Got me thinking about a general principle… When a potential difference is high, the associated flux is strong and steady, with high SNR. When it is small, the flux is weak and wiggly, with low SNR, and very susceptible to noise. That’s a nice metaphor.
When a signal is noisy, the best thing you can do is not to try to separate signal from noise, but to increase the potential difference producing the flux.
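Back-of-the-envelope version in Python (my numbers, not from the conversation): for a fixed noise floor, SNR in dB scales directly with the drive amplitude, so raising the “potential difference” buys you margin without touching the noise at all.

```python
# SNR (in dB) for a given signal swing against an assumed fixed noise floor.
import math

def snr_db(signal_volts, noise_volts):
    return 20 * math.log10(signal_volts / noise_volts)

noise = 0.05  # 50 mV of ambient noise, an assumed figure

for v in (0.5, 5.0, 20.0):  # small logic-level swing vs. a 20V supply rail
    print(f"{v:>5.1f} V swing -> SNR ~ {snr_db(v, noise):5.1f} dB")
# Going from a 0.5V swing to 20V gains roughly 32 dB with the same noise.
```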