A+ analysis of the debate over responsible disclosure of AI advances by @RebeccaCrootof.

What’s riskier for tech that could cause great harm: democratization where nearly anyone can abuse it, or hoarding by a handful of big companies/gov’ts?
For nuclear weapons, non-proliferation seems to be the strategy. (Though the non-proliferation is of physical materials like enriched uranium, rather than knowledge of how to build a bomb, which has ultimately proven hard to contain.)
Internet access is on the other end of the spectrum of desirable diffusion. “Internet for all” is the worthy, non-controversial rallying cry of the @internetsociety, even as particular apps, including social media, are increasingly scrutinized over what wrongs they empower.
The difference in our views over these extremes might lie in how differently their risk/reward ratios are perceived. And, unlike knowledge of nuclear bomb-making, the overall value of the Internet is naturally tied to how many people use it. And to use it is to spread how it works.
AI, specifically machine learning, complicates all of this. If the weaponizable element is fundamental insights about building the tech, it’s hard to imagine keeping that bottled up long term, any more than the basics of atomic fission could be.
But if an AI model’s power is derived from large, hard-to-create or -obtain datasets, it’s possible those could rest with just a few entities. And the benefits could be democratized by keeping the models “server side” and letting everyone else query or use them in a metered fashion.
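
A rough sketch of what metered, “server side” access might look like: the model and the data behind it never leave the provider; callers only send prompts and get answers back, subject to a quota. The names and the limit here are purely illustrative, not any particular provider’s API.

from collections import defaultdict

QUOTA_PER_DAY = 1000          # illustrative per-caller limit
usage = defaultdict(int)       # caller id -> queries used so far today

def query_model(caller_id, prompt, model):
    """Answer a single query if the caller is within quota."""
    if usage[caller_id] >= QUOTA_PER_DAY:
        raise PermissionError("quota exceeded; access is metered")
    usage[caller_id] += 1
    # The model itself stays on the provider's servers; only the
    # prompt and the response cross the wire.
    return model(prompt)

# Example with a stand-in "model"
echo_model = lambda prompt: f"answer to: {prompt}"
print(query_model("alice", "What's the forecast?", echo_model))
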
That’s the way people use electricity generated from nuclear plants without spreading the uranium itself. So: is there some piece of AI that separates withholdable “uranium” from spreadable knowledge of how it works?
And, given how malleable AI’s uses tend to be: if something here is truly too scary to democratize, isn’t it perhaps too scary for anyone to have, including (especially?) the big firms and governments most likely to accrue the digital uranium to power it?
The point of intervention - whether aimed at restraining the AI-developing firms or the public at large - is likely to be data, and for many of the most worrisome uses, data about people.
Which gives another reason for us to forge new privacy theory and practice: not only for the sake of individuals, but for overall security. (Techniques like differential privacy, designed to protect individuals’ info in datasets while keeping the sets useful, wouldn’t help here.)
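
For context on that parenthetical: differential privacy’s textbook form is the Laplace mechanism, which adds calibrated noise to an aggregate query so no single person’s presence in the dataset can be inferred, while the overall statistic stays useful. A minimal illustrative sketch, with a toy dataset and an arbitrary epsilon:

import random

def private_count(records, predicate, epsilon=0.1):
    """Return a noisy count satisfying epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1  # adding or removing one person changes a count by at most 1
    scale = sensitivity / epsilon
    # Difference of two exponentials yields Laplace(0, scale) noise
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Example: noisy count of records flagged "sensitive" in a toy dataset
records = [{"sensitive": bool(i % 3)} for i in range(1000)]
print(private_count(records, lambda r: r["sensitive"]))
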