Our group @ai4life_harvard is gearing up to showcase our recent research and connect with the #ML #TrustworthyML #XAI community at #NeurIPS2022. Here’s where you can find us at a glance. More details about our papers/talks/panels in the thread below 👇 [1/N]
[Conference Paper] Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations (joint work with #TessaHan and @Suuraj) -- arxiv.org/abs/2206.01254. More details in this thread [2/N]
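The function-approximation framing behind this paper can be illustrated with a toy sketch: a post hoc explainer fits a linear surrogate to a black-box model in a neighborhood of an input, and the surrogate's weights serve as feature attributions. Everything below (the stand-in model, function names, and Gaussian sampling scheme) is hypothetical and for illustration only, not the paper's implementation:

```python
# Toy sketch of the "local linear approximation" view of post hoc
# explanations. A black-box model f is approximated near an input x0
# by a linear surrogate g(x) = w . x + b, fit on perturbed neighbors
# of x0; the weights w act as feature attributions.
import numpy as np

def black_box(X):
    # Hypothetical nonlinear model taking a batch of 2-D inputs.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_linear_explanation(f, x0, n_samples=500, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Sample a Gaussian neighborhood around x0.
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = f(X)
    # Least-squares fit of the linear surrogate; different explainers
    # correspond to different losses and neighborhood definitions.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]  # weights w, intercept b

x0 = np.array([0.0, 1.0])
w, b = local_linear_explanation(black_box, x0)
# Near x0 = (0, 1) the true gradient is (cos(0), 2*1) = (1, 2),
# so the fitted weights should land close to those values.
```

Methods differ in how the neighborhood is sampled and which loss is minimized; this sketch uses plain Gaussian perturbations with unweighted least squares.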
[Conference Paper] Efficient Training of Low-Curvature Neural Networks (joint work with @Suuraj, #KyleMatoba, @francoisfleuret) -- arxiv.org/abs/2206.07144. More details in this thread [3/N]
[Conference Paper] OpenXAI: Towards a Transparent Evaluation of Model Explanations (joint work w/ @_cagarwal, @SatyaXploringAI, @eshikasax, @MartinPawelczyk, @narijohnson, #Isha, @marinkazitnik) -- arxiv.org/abs/2206.11104. More details in this thread [4/N]
[Invited Talk/Panel] I will be giving a talk on "A Brief History of Explainable AI: From Simple Rules to Large Pretrained Models" & participating in a panel @WiMLworkshop. My talk will give a brief overview of #XAI while highlighting our work in the area. sites.google.com/view/wiml2022/… [5/N]
[Invited Talk] I will be giving a talk on "Does Model Understanding Improve Clinical Decision Making" @SymposiumML4H. My talk will showcase our user studies with healthcare professionals on the effectiveness of existing #XAI tools & their wishlists. ml4health.github.io/2022/ [6/N]
[Workshop Paper] On the Impact of Adversarially Robust Models on Algorithmic Recourse (joint work with @SatyaXploringAI @_cagarwal) at TSRML workshop -- openreview.net/forum?id=qnSsY…. TLDR: When models are adversarially robust, corresponding recourses will be harder to implement [7/N]
[Workshop Paper] Rethinking Explainability as a Dialogue: A Practitioner’s Perspective (joint w/ @dylanslack20 @yuxinch @ChenhaoTan @sameer_) at HCAI workshop -- arxiv.org/abs/2202.01875. More details in this thread [8/N]
[Workshop Paper] TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations (joint w/ @dylanslack20 @SatyaXploringAI @sameer_) at TSRML workshop -- arxiv.org/abs/2207.04154. More details in this thread [9/N]
We look forward to meeting everyone in the @trustworthy_ml and @XAI_Research communities and the broader ML community at #NeurIPS2022 #WiMLNeurIPS2022 @WiMLworkshop, and to chatting more about our research and learning more about yours! Please drop by to say hi :) [N/N]

Thread by Hima Lakkaraju (@hima_lakkaraju)
