Want to generate black box explanations that are more stable and robust to distribution shifts? Our latest #ICML2020 paper provides a generic framework for generating robust local/global linear/rule-based explanations.
Paper: proceedings.icml.cc/static/paper_f…. Thread ↓
Many existing explanation techniques are highly sensitive even to small changes in data. This results in: (i) incorrect and unstable explanations, and (ii) explanations of the same model that differ depending on the dataset used to construct them.
To address these shortcomings, we propose a framework based on adversarial training. We formulate and optimize a minimax objective that constructs explanations with the highest fidelity over a set of possible distribution shifts.
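For intuition, here is a minimal sketch of what such a minimax objective can look like (notation illustrative, not the paper's exact formulation): with black box $f$, explanation family $\mathcal{E}$, a set of plausible shifted distributions $\mathcal{D}$, and a fidelity loss $\ell$,

$$\min_{E \in \mathcal{E}} \; \max_{D' \in \mathcal{D}} \; \mathbb{E}_{x \sim D'}\big[\, \ell\big(f(x),\, E(x)\big) \,\big]$$

i.e., the explanation $E$ is trained to stay faithful to $f$ even under the worst-case shift in $\mathcal{D}$.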
This paper is one of the first attempts to outline and tackle the problem of distribution shifts in the context of model explanations.
We will be at #ICML2020 to discuss this work tomorrow, 7/15.
Slot 1: 8:00–8:45am ET / 7:00–7:45am CT / 5:00–5:45am PT / 12:00–12:45am AoE
Slot 2: 11:00–11:45pm ET / 10:00–10:45pm CT / 8:00–8:45pm PT / 3:00–3:45pm AoE
Link: icml.cc/virtual/2020/p…
@trustworthy_ml Above thread ↑ on our ICML paper.
