Shlomo Engelson Argamon @ShlomoArgamon
12 tweets, 3 min read
Regulations, arguably, should not be based on a detailed understanding of how AI systems work (which regulators cannot have in any depth). However, if we are to consider AI systems trustworthy, they need to be able to explain their decisions in terms that humans can understand. 1/
Not explanations involving specifics of the algorithms, weights in a neural network, etc., but explanations that engage people's theories of mind, explanations at the level of Dennett's intentional stance - in terms of values, goals, plans, and intentions. 2/
Previous computer systems, to be comprehensible - and, yes, trustworthy - needed to consistently present behavior that fit people's natural inferences about physical models (e.g., the "desktop"). Anyone old enough to remember programming a VCR? Nerdview is a failure of explanation. 3/
AI systems will need to engage not the mind's physical inference, but its *social* inference (theory of mind). AI systems should behave as minds, and explain their behavior as minds. Their failure modes must be as predictable and comprehensible as human ones (or physical ones). 4/
The counterargument of "How does a system explain how it decided that a stop sign was there? By listing network weights in a perceptual model?!" is a crimson herring. If the perceptual system is accurate enough in human terms (i.e., without crazy error modes), it can just say 5/
"I saw a stop sign," just like a human would. Full stop. More complex decisions, like swerving to avoid a bicyclist and then hitting a pedestrian would require more complex explanations involving values, goals, and intentions, as well as perception ("I didn't see the guy"). 6/
Saying that AI systems attain trustworthiness solely on the basis of experimental performance metrics, however rigorous and comprehensive, is utterly misguided. Measurable performance is necessary, but not at all sufficient, for (at least) two reasons. 7/
First, it is virtually impossible for a non-specialist to evaluate the sufficiency of an experimental methodology or the significance of its results. It's very easy to create misleading experiments, even without intending to. So this can only create trust among the credulous. 8/
Second, experimental performance guarantees do nothing to develop trust between an AI system and the humans it interacts with - that trust must develop through the interaction, which must therefore be comprehensible and explainable. 9/
In a word, to attain trustworthiness, AI systems must be able to form *relationships* with people. Not necessarily deep relationships, but they must still engage the human mind's systems of social connection. /FIN