Regulating #AI is important, but it can also be quite challenging in practice. Our #ICML2023 paper highlights the tensions between Right to Explanation & Right to be Forgotten, and proposes the first algorithmic framework to address these tensions arxiv.org/pdf/2302.04288… [1/N]
Multiple regulatory frameworks (e.g., GDPR, CCPA) were introduced in recent years to regulate AI. Several of these frameworks emphasized the importance of enforcing two key principles ("Right to Explanation" and "Right to be Forgotten") in order to effectively regulate AI. [2/N]
While Right to Explanation ensures that individuals who are adversely impacted by algorithmic outcomes are provided with an actionable explanation, Right to be Forgotten allows individuals to request erasure of their personal data from databases/models of an organization [3/N]
While several regulatory frameworks emphasize the importance of enforcing both of the aforementioned principles, it is unclear whether there are trade-offs between these principles, or whether it is even feasible to enforce them simultaneously in practice. [4/N]
In our #ICML2023 paper, we investigate the tensions that arise when we try to simultaneously enforce both Right to Explanation & Right to be Forgotten in practice. These tensions stem from the characteristics of SOTA ML methods designed to operationalize the two principles. [5/N]
Intuitively, the tension between the two principles stems from the following: enforcing the right to be forgotten may trigger model updates, which in turn invalidate previously provided actionable explanations that end users may already be acting upon, thus violating the right to explanation. [6/N]
We find that actionable (counterfactual) explanations generated by SOTA ML algorithms become invalid (i.e., acting upon them will no longer result in desired model prediction) once the underlying model is updated in order to accommodate data deletion requests. [7/N]
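This failure mode is easy to reproduce on a toy problem. The sketch below is my own illustration (not the paper's code): it fits a 1-D least-squares classifier, offers a counterfactual to a rejected point, then retrains after two deletion requests and shows that the same recourse no longer crosses the decision threshold.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares linear scorer; predict 1 when w*x + b >= 0.5."""
    w, b = np.polyfit(x, y, 1)
    return w, b

def counterfactual(w, b, eps=0.1):
    """Smallest feature value that crosses the 0.5 threshold (assumes w > 0)."""
    boundary = (0.5 - b) / w
    return boundary + eps

# Toy 1-D dataset: negatives at 0..3, positives at 6..9.
x = np.array([0., 1., 2., 3., 6., 7., 8., 9.])
y = np.array([0., 0., 0., 0., 1., 1., 1., 1.])

w, b = fit_linear(x, y)
x_cf = counterfactual(w, b)        # recourse offered to a rejected user
assert w * x_cf + b >= 0.5         # valid under the original model

# Two positive users near the boundary invoke the right to be forgotten.
keep = ~np.isin(x, [6., 7.])
w2, b2 = fit_linear(x[keep], y[keep])

assert w2 * x_cf + b2 < 0.5        # the same recourse is now invalid
```

Deleting the positive points closest to the boundary shifts the retrained boundary toward the rejected user, which is exactly what breaks the previously issued recourse.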
To tackle the aforementioned challenges, we propose the first algorithmic framework, RObust Counterfactual Explanations under Right to be Forgotten (ROCERF), to address the tension between the two key regulatory principles -- Right to Explanation and Right to be Forgotten. [8/N]
In particular, we formulate a novel optimization problem to generate actionable (counterfactual) explanations that are robust to model updates triggered by data deletion requests. [9/N]
We also derive an efficient algorithm to handle the combinatorial complexity of the above optimization problem. To do so, we build on ideas and techniques from the existing literature on actionable (counterfactual) explanations, unlearning, and leave-k-out estimation. [10/N]
We also derive theoretical results showing that the actionable explanations output by our method are provably robust to model updates triggered by worst-case data deletion requests with bounded costs, for linear models and certain classes of non-linear models. [11/N]
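To give a flavor of the robustness idea (this is my own brute-force illustration via leave-k-out enumeration, not the paper's efficient algorithm), the sketch below picks the counterfactual that remains valid under the current model and under every model retrained after deleting any k training points:

```python
import numpy as np
from itertools import combinations

def fit(x, y):
    """Least-squares linear scorer; returns (slope, intercept)."""
    return np.polyfit(x, y, 1)

def robust_counterfactual(x, y, k=1, eps=0.05):
    """Smallest x* predicted positive by the current model AND by every
    leave-k-out retrained model (brute force over all deletions of size k)."""
    models = [fit(x, y)]
    for idx in combinations(range(len(x)), k):
        keep = np.ones(len(x), dtype=bool)
        keep[list(idx)] = False
        models.append(fit(x[keep], y[keep]))
    # each model predicts positive to the right of its boundary (slope > 0)
    boundaries = [(0.5 - b) / w for w, b in models]
    return max(boundaries) + eps

# Toy 1-D dataset: negatives at 0..3, positives at 6..9.
x = np.array([0., 1., 2., 3., 6., 7., 8., 9.])
y = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
x_cf = robust_counterfactual(x, y, k=1)

# The recourse stays valid no matter which single point is later deleted.
for i in range(len(x)):
    keep = np.arange(len(x)) != i
    wi, bi = np.polyfit(x[keep], y[keep], 1)
    assert wi * x_cf + bi >= 0.5
```

The enumeration is exponential in k, which is precisely the combinatorial complexity the efficient algorithm in the paper is designed to avoid.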
Our experimental results with multiple real-world datasets also validate the effectiveness of our method in comparison with other SOTA actionable explanation methods as well as their general-purpose robust counterparts. [12/N]
Last but not least, this work is the result of a ton of hard work by two amazing students from our @ai4life_harvard group -- @SatyaIsIntoLLMs and @Jiaqi_Ma_ [13/N]
Please do reach out if you have any feedback/inputs on this work or thoughts about future research directions. Thank you for your time and attention! [N/N]
Our group @ai4life_harvard is gearing up to showcase our recent research and connect with the #ML #TrustworthyML #XAI community at #NeurIPS2022. Here’s where you can find us at a glance. More details about our papers/talks/panels in the thread below 👇 [1/N]
[Conference Paper] Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations (joint work with #TessaHan and @Suuraj) -- arxiv.org/abs/2206.01254. More details in this thread
One of the biggest criticisms of the field of post hoc #XAI is that each method "does its own thing": it is unclear how these methods relate to each other and which methods are effective under what conditions. Our #NeurIPS2022 paper provides (some) answers to these questions. [1/N]
In our #NeurIPS2022 paper, we unify eight different state-of-the-art local post hoc explanation methods, and show that they are all performing local linear approximations of the underlying models, albeit with different loss functions and notions of local neighborhoods. [2/N]
By doing so, we are able to explain the similarities & differences between these methods. These methods are similar in the sense that they all perform local linear approximations of models, but they differ considerably in "how" they perform these approximations [3/N]
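To give a flavor of this unifying view, here is a minimal LIME-style sketch (my own illustration, with assumed sampling and kernel choices): perturb the input, weight samples by proximity to the point being explained, and fit a weighted linear model to the black box's outputs; the fitted coefficients serve as the local explanation.

```python
import numpy as np

def local_linear_explanation(f, x0, n=500, sigma=0.5, width=1.0, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear model to f near x0.
    Returns per-feature local slopes (the explanation)."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.normal(size=(n, x0.size))      # local neighborhood
    yv = f(X)                                           # black-box outputs
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / width ** 2)  # proximity kernel
    # Weighted least squares with an intercept column.
    A = np.c_[X, np.ones(n)] * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, yv * np.sqrt(w), rcond=None)
    return coef[:-1]

# Black box: nonlinear in feature 0, independent of feature 1 near the origin.
f = lambda X: np.tanh(3 * X[:, 0]) + 0.0 * X[:, 1]
phi = local_linear_explanation(f, np.array([0.0, 0.0]))
# phi[0] should dominate phi[1] in the local explanation
```

In this view, different explanation methods correspond to different choices of the neighborhood distribution and the weighting kernel above.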
Explainable ML is a rather nascent field with lots of exciting work and discourse happening around it. But, it is very important to separate actual findings and results from hype. Below is a thread with some tips for navigating discourse and scholarship in explainable ML [1/N]
Overarching claims: We have all seen talks/tweets/discourse with snippets such as "explanations don't work" or "explanations are the answer to all these critical problems". Such claims often extrapolate results or findings from rather narrow studies. [2/N]
When we hear overarching claims, it is helpful to step back and ask: What is the evidence backing such claims? Which studies are they based on? What is the context/application? How were the studies carried out? How reasonable is it to extrapolate these claims? [3/N]
Excited to share our @AIESConf paper "Does Fair Ranking Improve Outcomes?: Understanding the Interplay of Human and Algorithmic Biases in Online Hiring". We investigate if fair ranking algorithms can mitigate gender biases in online hiring settings arxiv.org/pdf/2012.00423… [1/n]
More specifically, we were trying to examine the interplay between humans and fair ranking algorithms in online hiring settings, and assess if fair ranking algorithms can negate the effect of (any) gender biases prevalent in humans & ensure that the hiring outcomes are fair [2/n]
We found that fair ranking algorithms certainly help across all job contexts, but their effectiveness in mitigating gender biases (prevalent in online recruiters) heavily depends on the nature of the job. [3/n]
If you have less than 3 hours to spare & want to learn (almost) everything about state-of-the-art explainable ML, this thread is for you! Below, I am sharing info about 4 of our recent tutorials on explainability presented at NeurIPS, AAAI, FAccT, and CHIL conferences. [1/n]
NeurIPS 2020: Our longest tutorial (2 hours 46 mins) discusses various types of explanation methods, their limitations, evaluation frameworks, applications to domains such as decision making/NLP/vision, and open problems explainml-tutorial.github.io/neurips20 @sameer_ @julius_adebayo [2/n]
AAAI 2021: Can't spend 2 hours 46 mins on this topic? No problem! Our tutorial at AAAI 2021 is right here (1 hour 32 mins): explainml-tutorial.github.io/aaai21. This one discusses different explanation methods, their limitations, evaluation, and open problems. @sameer_ @julius_adebayo [3/n]
As I struggled to deal with the impact of COVID on my family members in India, I was a day late submitting my reviews for a conference, and I got a message from a senior reviewer with the blurb below. My humble request to everyone: please don't say this to anyone, ever! [1/n]
I don't typically share any of my personal experiences on social media, but I strongly felt that I needed to make an exception this time. I am so incredibly hurt, appalled, flabbergasted, and dumbfounded by that blurb. It shows how academia can lack basic empathy! [2/n]
What bothers me is that I am an assistant professor at Harvard & I am decently known in my area of work. If someone can say this to me, I can't even imagine what they can say to a grad student. I am so sad this is the state of the research community that I am a part of! [3/n]