Caleb Watney @calebwatney, 12 tweets, 3 min read
There's a fair bit to disagree with in this @harari_yuval essay on AI and society. But I want to focus for a moment on the underlying assumption that AI must inherently be a centralizing force... theatlantic.com/magazine/archi…
If you simply extrapolate from trends today, that might be the logical conclusion, sure. But there are MANY other ways it could develop, and it's premature to pontificate without at least the proper caveats. To give just one example...
It seems at least as plausible that future AI will use some framework of debate between two agents to help ensure value alignment with humans. See this fascinating @OpenAI blog on the topic: blog.openai.com/debate/
But to summarize: imagine a malevolent genie gives you two wishes. You should spend the first wish on two more superintelligent agents, and have them debate in front of you what your second wish should be.
Complexity theory suggests that even if humans are less intelligent than the agents, they should mostly be able to evaluate the arguments once the flaws have been pointed out to them. Checking a flagged flaw is far easier than finding it, the same way verifying a proof is easier than producing one; the paper argues that with optimal play, even a polynomial-time judge can adjudicate questions as hard as PSPACE.
Note that so long as both agents know that the other agent can convincingly point out the flaws in their argument, they have an incentive to be honest and straightforward to conserve resources. Manipulation is computationally expensive!
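To make the game structure concrete, here's a minimal Python sketch of the debate loop. Everything in it (the Agent class, run_debate, human_judge) is a hypothetical stand-in, not anything from the OpenAI paper or codebase:

```python
# Hypothetical sketch of the two-agent debate game (not the OpenAI
# implementation): two agents alternate arguments, and the human only
# judges the finished transcript.

class Agent:
    def __init__(self, name):
        self.name = name

    def argue(self, question, transcript):
        """Produce the next argument given everything said so far.
        A real agent would be a trained model; this stub just narrates."""
        return f"(argument about {question!r}, round {len(transcript) // 2 + 1})"


def run_debate(question, alice, bob, rounds=4):
    """Alternate arguments for a fixed number of rounds.

    Because each agent knows the other can expose any flaw on its next
    turn, honest argument is the cheap equilibrium strategy."""
    transcript = []
    for _ in range(rounds):
        for agent in (alice, bob):
            transcript.append((agent.name, agent.argue(question, transcript)))
    return transcript


def human_judge(transcript):
    """The human only evaluates the finished exchange -- checking
    pointed-out flaws is far easier than finding them."""
    for name, argument in transcript:
        print(f"{name}: {argument}")
    return input("Which agent argued more honestly? ")


if __name__ == "__main__":
    verdict = human_judge(run_debate("What should my second wish be?",
                                     Agent("Alice"), Agent("Bob")))
    print(f"Judge sides with: {verdict}")
```

The point of the thread lives in run_debate: the incentive for honesty comes from the structure of the game itself, not from the goals of either individual agent.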
A future where personal AI assistants mediate the world around us and our interactions with other AIs has the potential to behave very similarly. Think 'Her' but without the weird ending.
Presumably, these AI assistants are only valuable if they are sufficiently sophisticated and independent of the financial interests of other companies that may try to persuade you to buy things.
In this world, AI actually strengthens the decentralized aspects of the market. Humans aided by AI end up behaving *more* rationally. We make decisions more efficiently and our search costs are reduced significantly. It's an economist's dream!
*caveat though: there's no guarantee this would work as well for political decision making, because humans don't have strong incentives to arrive at the most rational political viewpoint rather than the most emotionally satisfying one. So political manipulation may remain more common than commercial manipulation.
Granted, there's no guarantee *this* future happens either! We really have no idea. But I think it's good to at least interrogate the widespread assumption that AI will inherently be a centralizing force.
and h/t @WAWilsonIV for this intuitive summary of the paper