Didn’t realize this had become (deservedly) a sort of Chinese Room-level provocation that AGI types anxiously get into contortions to refute. Only recently realized, though, that it is basically Goodhart’s law in an AI context.
Fwiw I think this objection does in fact kill 90% of ideas for goal-directed AIs based on optimization. People arguing the semantics of how goals are not the same as reward functions miss the point. Optimization approaches force the degenerate case where they are the same.
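The degeneracy is easy to see in a toy setup. Here’s a minimal sketch of the regressional flavor of Goodhart: the reward function is the goal plus noise, and the harder you select on the reward, the more the reward overstates the goal for whatever you end up picking. The `winner` helper, the unit-Gaussian noise model, and the sample sizes are all my illustrative assumptions, not anything from the original argument.

```python
# Toy sketch of (regressional) Goodhart: the "goal" is a latent true value,
# the "reward function" is a noisy proxy for it. Selecting hard on the proxy
# makes the two come apart. Distributions and names are illustrative only.
import random

random.seed(0)

def winner(n: int) -> tuple[float, float]:
    """Among n candidates, return (true value, proxy score) of the proxy-maximizer."""
    candidates = []
    for _ in range(n):
        true = random.gauss(0, 1)          # the goal we actually care about
        proxy = true + random.gauss(0, 1)  # the reward we can actually measure
        candidates.append((true, proxy))
    return max(candidates, key=lambda c: c[1])

for n in (10, 1_000, 100_000):
    trials = [winner(n) for _ in range(300)]
    avg_true = sum(t for t, _ in trials) / len(trials)
    avg_proxy = sum(p for _, p in trials) / len(trials)
    # More optimization pressure (bigger n) widens the reward/goal gap.
    print(f"n={n:>7}: winner's reward ~ {avg_proxy:.2f}, winner's true value ~ {avg_true:.2f}")
```

The gap between the two columns grows with n: under enough optimization pressure, maximizing the reward and pursuing the goal stop being the same activity.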
The only approach I can think of that finesses this is Carse’s infinite game (“play to continue the game” as opposed to “play to win”). This is hard to formalize, though. You basically want a Turing-complete system that tries not to halt. Sort of a looking-for-Rule-110 thing.
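To make the Rule 110 reference concrete: it’s the elementary cellular automaton Cook proved Turing-complete, and its interesting behavior is precisely that it keeps going. Here’s a minimal stepper, under the simplifying assumption of a fixed-width tape with wraparound edges (the actual Turing-completeness construction needs an unbounded, patterned tape).

```python
# Rule 110: the elementary cellular automaton proven Turing-complete.
# Sketch assumption: fixed-width tape with wraparound edges, a simplification.

RULE = 110  # 8-bit lookup table: one output bit per 3-cell neighborhood

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    return [
        # Read (left, center, right) as a 3-bit index into the rule table.
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

if __name__ == "__main__":
    width, steps = 64, 32
    cells = [0] * width
    cells[-1] = 1  # a single live cell seeds the classic growing pattern
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```

In this framing, an infinite-game player is something like a search over rules and initial conditions for evolutions that never settle into a fixed point or short cycle, hence “looking for Rule 110.”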
The metaverse will need to be jailbroken to be usable at all
The longer I let the FB+Microsoft versions of the idea simmer, the more hopelessly lame they seem. It’s like Q demonstrating cool hardware to James Bond before Bond himself puts it through its paces.
Laziness is actually a mitigating adaptation for being a pushover
You can only pick 2 of 3: industrious, pushover, conflict-averse
Otherwise you’ll spend your life being manipulated
A lot of being a pushover comes from simply not having particularly strong burning desires. You’re vulnerable to being co-opted by people who want more, more badly
When someone asks you to do something and you can’t claim you’re up to something more important (the only socially acceptable excuse) without picking a fight, “I’m le tired” is your go-to.