We've been working on #ExplainableAI at @fiddlerlabs for a year now; here is a thread on some of the lessons we've learned along the way.
2/ There is no consensus on what "Explainability" means, and people use many different words to refer to it.
3/ However, one thing is clear:

"AI is a Black-Box and people want to look inside".

The reasons to look inside vary from Model Producers (Data Scientists) to Model Consumers (Business teams, Model validators, Regulators, etc).
4/ Model Producers want answers to questions like:

1. Is my model working?
2. How can I make it better?
5/ Model Consumers want answers to questions like:

1. How reliable is this model?
2. How confident should I be in the output?
3. Why is it telling me that?
6/ There is also a school of folks who think "Explainable AI" is a fad or a red herring. Even if they are right, the problem remains:

AI is still a black box, and people still want to look inside!
7/ Broadly, one can classify explanation strategies into three categories:

a) Create a "model of the model" (a surrogate).
b) Use a simpler model and explain it directly.
c) Perform input-output analysis of the black box.
8/ Pros and cons of method (a):

Pros: Surrogate models are easier to explain.
Cons: Surrogate models can be unfaithful to the original model.

Example: Predict credit card default with a neural network, but use a decision tree for the explanation of key decision points.
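A minimal sketch of method (a) in scikit-learn, assuming a trained black-box classifier `black_box` and a feature matrix `X` (hypothetical names, not from the thread):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_surrogate(black_box, X, max_depth=3):
    # Label the data with the black box's own predictions, not the ground
    # truth -- the surrogate should mimic the model, not the world.
    y_bb = black_box.predict(X)
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, y_bb)
    # Fidelity: how often the surrogate agrees with the black box.
    fidelity = np.mean(surrogate.predict(X) == y_bb)
    return surrogate, fidelity

# surrogate, fidelity = fit_surrogate(black_box, X)
# print(export_text(surrogate, feature_names=feature_names))
```

A low fidelity score is the "unfaithful surrogate" con showing up in practice.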
9/ Method (b) proposes:

"When it makes sense to use a simpler and directly explainable model, it is better to go that way."

The reason not to do this is that a complex model could be more accurate and have higher predictive power.
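A minimal sketch of method (b), again with scikit-learn, assuming a numeric feature matrix `X`, binary labels `y`, and a `feature_names` list (all hypothetical names):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A linear model on standardized features is directly explainable:
# each coefficient's sign and magnitude show how a feature pushes the score.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(feature_names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:>20s}: {w:+.3f}")
```

No surrogate is needed because the model explains itself; the price, as noted above, can be lower predictive power.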
10/ When a complex model is absolutely necessary and a surrogate model cannot be faithful enough, one can use the third approach (c): input/output analysis of the black box.

Example: When classifying images, determine how sensitive the prediction "zebra" is to the presence of "stripes".
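A minimal sketch of method (c) as occlusion-style sensitivity analysis, assuming a `predict_proba(batch)` function that returns class probabilities for a batch of HxWxC images and a `class_idx` for "zebra" (hypothetical names):

```python
import numpy as np

def occlusion_sensitivity(predict_proba, image, class_idx, patch=16):
    # Blank out one patch at a time and record how much the target class
    # probability drops -- large drops mark influential regions
    # (e.g., the stripes for "zebra").
    h, w, _ = image.shape
    base = predict_proba(image[None])[0, class_idx]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch, :] = 0
            p = predict_proba(occluded[None])[0, class_idx]
            heatmap[i // patch, j // patch] = base - p
    return heatmap
```

Perturbation methods such as LIME and SHAP generalize this idea beyond image patches.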
11/ There are cases where a tradeoff exists between interpretability and predictive accuracy.

However, there are also cases where maximizing predictive accuracy is not the only goal in building a model.
12/ Wherever model decisions carry a higher risk of harm, we need 'Explainable AI'.

Examples: a physician relying on a system that classifies X-rays as cancerous or non-cancerous; a loan officer making decisions based on a credit lending model; etc.
13/ From the model developer's point of view, a great 'Explainable AI' solution should highlight the risk of errors before the AI system is launched to the world.
14/ Finally, not all model consumers are well versed in interpreting probabilistic output; they need intuitive explanations and answers to what-if questions.

Are there similar loans that we accepted/rejected in the past? What if the customer's FICO score was 10 points higher?
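A minimal sketch of both questions, assuming a trained credit model `model` with `predict_proba`, past applications `X_hist` with decisions `y_hist`, a 1-D applicant feature vector `applicant`, and a `FICO_IDX` column index (all hypothetical names):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# "Are there similar loans that we accepted/rejected in the past?"
# Nearest neighbors in standardized feature space as a naive notion of "similar".
scaler = StandardScaler().fit(X_hist)
nn = NearestNeighbors(n_neighbors=5).fit(scaler.transform(X_hist))
_, idx = nn.kneighbors(scaler.transform(applicant.reshape(1, -1)))
print("decisions on the 5 most similar past loans:", y_hist[idx[0]])

# "What if the customer's FICO score was 10 points higher?"
what_if = applicant.copy()
what_if[FICO_IDX] += 10
before = model.predict_proba(applicant.reshape(1, -1))[0, 1]
after = model.predict_proba(what_if.reshape(1, -1))[0, 1]
print(f"P(approve): {before:.2f} -> {after:.2f}")
```

The second half is a simple what-if query; full counterfactual explanations go further and search for the smallest change that flips the decision.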