"AI is a Black-Box and people want to look inside".
The reasons for looking inside differ between Model Producers (Data Scientists) and Model Consumers (Business teams, Model validators, Regulators, etc.).
Model Producers:
1. Is my model working?
2. How can I make it better?
Model Consumers:
1. How reliable is this model?
2. How confident should I be in the output?
3. Why is it telling me that?
AI is still a black box and people still want to look inside! There are three broad approaches:
a) Create a "model of the model" (a surrogate).
b) Use a simpler model and explain it directly.
c) Perform input-output analysis of the black box.
Pros: Surrogate models are easier to explain.
Cons: Surrogate models can be unfaithful to the original model.
Example: Predict credit card default with a neural network, but fit a decision tree to the network's predictions to explain its key decision points.
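A minimal sketch of the surrogate approach, assuming scikit-learn and synthetic stand-in data for the credit-default task: train the neural network on the true labels, then fit a shallow decision tree to the network's predictions and read the tree's rules as the explanation. The fidelity check also quantifies the "unfaithful surrogate" risk noted above.

```python
# Sketch: fit a decision-tree surrogate to a neural-network credit-default model.
# Assumes scikit-learn; the synthetic data stands in for a real credit dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) The black box: a neural network trained on the true default labels.
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
black_box.fit(X_train, y_train)

# 2) The surrogate: a shallow tree trained to mimic the black box's *predictions*.
y_surrogate = black_box.predict(X_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, y_surrogate)

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")

# The tree's rules are the human-readable explanation of key decision points.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```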
"When it makes sense to use a simpler and directly explainable model, it is better to go that way."
The reason not to do this is that a more complex model may be more accurate and have higher predictive power.
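As a minimal sketch of the directly explainable route (assuming scikit-learn; the synthetic data and feature names are stand-ins for a real credit-default task), a logistic regression needs no surrogate: its standardized coefficients are the explanation.

```python
# Sketch: option (b) - fit a simpler, directly explainable model and read it directly.
# Assumes scikit-learn; synthetic data stands in for the same credit-default task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=8, random_state=0)

simple_model = make_pipeline(StandardScaler(), LogisticRegression())
simple_model.fit(X, y)

# The standardized coefficients are the explanation: sign and magnitude per feature.
coefs = simple_model.named_steps["logisticregression"].coef_[0]
for i, c in enumerate(coefs):
    print(f"feature_{i}: {c:+.2f}")
```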
Example: When classifying images, determine how sensitive the prediction "zebra" is to the presence of "stripes".
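A minimal sketch of such an input-output probe is occlusion sensitivity: slide a gray patch over the image and record how much the "zebra" score drops. The classifier below is a toy stand-in (a crude stripe detector), not a real zebra model; with a real network you would call its prediction function in its place.

```python
# Sketch: input-output (occlusion) sensitivity analysis of a black-box image classifier.
import numpy as np

def zebra_score(image: np.ndarray) -> float:
    """Toy stand-in for a black-box P(zebra): rewards stripe-like horizontal contrast."""
    return float(np.abs(np.diff(image, axis=1)).mean())

rng = np.random.default_rng(0)
image = rng.random((64, 64))
image[:, ::8] = 1.0  # paint vertical "stripes" into the toy image
baseline = zebra_score(image)

patch = 8
sensitivity = np.zeros((64 // patch, 64 // patch))
for i in range(0, 64, patch):
    for j in range(0, 64, patch):
        occluded = image.copy()
        occluded[i:i + patch, j:j + patch] = occluded.mean()  # gray out one patch
        # A large drop means the prediction depends heavily on that region.
        sensitivity[i // patch, j // patch] = baseline - zebra_score(occluded)

print("most influential patch:", np.unravel_index(sensitivity.argmax(), sensitivity.shape))
```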
However, there are also cases where maximizing predictive accuracy is not the only reason to build a model.
Example: A physician relying on a system that classifies X-rays as cancerous or non-cancerous, or a loan officer deciding based on a credit lending model.
Such users ask questions like: Are there similar loans that we accepted/rejected in the past? What if the customer's FICO score were 10 points higher?
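Both questions can be answered with simple input-output probes of the black box. The sketch below (assuming scikit-learn; the data, the fitted model, and the "fico" column are illustrative stand-ins, not a real lending system) looks up the most similar past applications and re-scores a counterfactual with a higher FICO value.

```python
# Sketch: example-based and counterfactual probes of a black-box credit model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))   # toy columns: fico, income, debt, utilization, age
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
FICO = 0                         # index of the hypothetical FICO-score column

model = GradientBoostingClassifier(random_state=0).fit(X, y)
applicant = X[0]

# "Are there similar loans we accepted/rejected in the past?" -> nearest past cases.
nn = NearestNeighbors(n_neighbors=5).fit(X)
_, idx = nn.kneighbors(applicant.reshape(1, -1))
print("outcomes of the 5 most similar past applications:", y[idx[0]])

# "What if the FICO score were 10 points higher?" -> a counterfactual re-score.
counterfactual = applicant.copy()
counterfactual[FICO] += 0.1      # stands in for "+10 FICO points" on this toy scale
before = model.predict_proba(applicant.reshape(1, -1))[0, 1]
after = model.predict_proba(counterfactual.reshape(1, -1))[0, 1]
print(f"P(approval) before: {before:.2f}, after the FICO increase: {after:.2f}")
```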