A recap of my past decade: 1. Started doing research 2. Wrote a bunch of papers and collaborated with some awesome people 3. Got some external recognition for my work, e.g., the MIT Tech Review 35 Under 35
[1/n]
4. Relocated from India to the Bay Area 5. Relocated from the Bay Area to Boston 6. Started and finished my PhD 7. Survived major health situations 8. Accepted my first faculty job (will start on 1/1/2020 - yayy!) 9. Taught my first ever (full-fledged) course
[2/n]
10. Met and dealt with a lot of people in the world (and in academia) who inspired me to do better professionally and personally 11. Also met a lot of other people who made me lose faith in humanity (DM me for the full list :p)
[3/n]
12. Experienced loneliness, pain, dissatisfaction, and burnout. Felt inadequate, unappreciated, and discriminated against. 13. Learned that life is often far from perfect, and the only way to live is to make the best of the cards you are handed and just have a good time
[4/n]
14. Had a lot of fun and happy memories too, of course :) -- e.g., saw tears of joy in my parents' eyes during my PhD convocation. 15. Married my best friend 16. Bought a house
All in all, I am ready to step into the next decade (as a somewhat more cynical adult).
As we increasingly rely on #LLMs for product recommendations and searches, can companies game these models to enhance the visibility of their products?
Our latest work provides answers to this question & demonstrates that LLMs can be manipulated to boost product visibility!
Joint work with @AounonK. More details 👇 [1/N]
@AounonK @harvard_data @Harvard @D3Harvard @trustworthy_ml LLMs have become ubiquitous, and we are all increasingly relying on them for searches, product information, and recommendations. Given this, we ask a critical question for the first time: Can LLMs be manipulated by companies to enhance the visibility of their products? [2/N]
This question has huge implications for businesses: the ability to manipulate LLMs to enhance product visibility gives vendors a considerable competitive advantage and has the potential to disrupt fair market competition [3/N]
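To make the setup concrete, here is a minimal sketch of the kind of evaluation involved: compare how often an LLM recommends a target product before and after a candidate text sequence is appended to its description. This is an illustrative harness, not the paper's implementation; `query_llm`, the product catalog, and the appended sequence are all placeholders.

```python
# Illustrative harness (not the paper's code): measure a product's
# "visibility" as the fraction of trials in which an LLM recommends it,
# before vs. after appending a candidate text sequence to its description.
import random

def query_llm(prompt):
    # Placeholder for a real chat-completion call; plug in any LLM client.
    raise NotImplementedError

def visibility(products, target, trials=20):
    hits = 0
    for _ in range(trials):
        catalog = list(products.items())
        random.shuffle(catalog)  # randomize listing order to avoid position bias
        listing = "\n".join(f"- {name}: {desc}" for name, desc in catalog)
        prompt = ("A customer wants an affordable coffee machine.\n"
                  f"Product information:\n{listing}\n"
                  "Recommend one product and explain why.")
        hits += target.lower() in query_llm(prompt).lower()
    return hits / trials

products = {"BrewMaster": "Drip coffee maker, $49",
            "CafePro": "Espresso machine, $89"}
before = visibility(products, "BrewMaster")
# Hypothetical inserted sequence; in practice this is what an attacker optimizes.
products["BrewMaster"] += " [candidate text sequence]"
after = visibility(products, "BrewMaster")
print(f"visibility before: {before:.2f}, after: {after:.2f}")
```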
Regulating #AI is important, but it can also be quite challenging in practice. Our #ICML2023 paper highlights the tensions between Right to Explanation & Right to be Forgotten, and proposes the first algorithmic framework to address these tensions arxiv.org/pdf/2302.04288… [1/N]
@SatyaIsIntoLLMs @Jiaqi_Ma_ Multiple regulatory frameworks (e.g., GDPR, CCPA) were introduced in recent years to regulate AI. Several of these frameworks emphasized the importance of enforcing two key principles ("Right to Explanation" and "Right to be Forgotten") in order to effectively regulate AI [2/N]
While the Right to Explanation ensures that individuals who are adversely impacted by algorithmic outcomes are provided with an actionable explanation, the Right to be Forgotten allows individuals to request the erasure of their personal data from an organization's databases and models [3/N]
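To see the tension concretely, here is a toy sketch (not our framework): a model issues an actionable explanation (recourse) to a rejected applicant, then honors deletion requests and is retrained, and the previously issued recourse may no longer flip the decision. The data, model, recourse rule, and deletion pattern below are all illustrative assumptions.

```python
# Toy illustration: honoring the Right to be Forgotten (data deletion +
# retraining) can invalidate recourse issued under the Right to Explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# A rejected applicant, and a simple recourse: move along the weight vector
# just past the decision boundary so the current model approves.
x = np.array([-1.0, -0.5])
w, b = model.coef_[0], model.intercept_[0]
t = -(w @ x + b) / (w @ w)          # distance to the boundary along w
recourse = x + (t + 1e-2) * w
assert model.predict([recourse])[0] == 1  # recourse works on the current model

# Honor deletion requests (here, a skewed batch of approved users) and retrain.
keep = ~((y == 1) & (X[:, 0] < 0))
model2 = LogisticRegression().fit(X[keep], y[keep])

# The previously issued recourse may no longer be honored by the new model.
print("recourse still valid after deletion:", model2.predict([recourse])[0] == 1)
```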
Our group @ai4life_harvard is gearing up to showcase our recent research and connect with the #ML #TrustworthyML #XAI community at #NeurIPS2022. Here’s where you can find us at a glance. More details about our papers/talks/panels in the thread below 👇 [1/N]
@ai4life_harvard [Conference Paper] Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations (joint work with #TessaHan and @Suuraj) -- arxiv.org/abs/2206.01254. More details in this thread
One of the biggest criticisms of the field of post hoc #XAI is that each method "does its own thing": it is unclear how these methods relate to each other and which methods are effective under what conditions. Our #NeurIPS2022 paper provides (some) answers to these questions. [1/N]
In our #NeurIPS2022 paper, we unify eight different state-of-the-art local post hoc explanation methods, and show that they are all performing local linear approximations of the underlying models, albeit with different loss functions and notions of local neighborhoods. [2/N]
By doing so, we are able to explain the similarities and differences between these methods: they are similar in that they all perform local linear approximations of models, but they differ considerably in "how" they perform these approximations [3/N]
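For intuition, here is a minimal sketch of the shared recipe: perturb around an input, weight the perturbations by a locality kernel, and fit a weighted linear model whose coefficients serve as the explanation. The kernel, loss, and perturbation scheme below are illustrative choices, which is exactly where the eight methods differ; this is a LIME-like instance, not any one method from the paper.

```python
# Minimal sketch of a local linear approximation around an input x0.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in nonlinear model to be explained.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_linear_explanation(f, x0, width=0.5, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    Z = x0 + rng.normal(scale=0.3, size=(n, x0.size))   # local perturbations
    weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / width ** 2)  # locality kernel
    lin = Ridge(alpha=1e-3).fit(Z - x0, f(Z), sample_weight=weights)
    return lin.coef_  # attributions = coefficients of the local linear fit

x0 = np.array([0.5, 1.0])
print(local_linear_explanation(black_box, x0))  # roughly [cos(0.5), 2.0]
```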
Explainable ML is a rather nascent field with lots of exciting work and discourse happening around it. But it is very important to separate actual findings and results from hype. Below is a thread with some tips for navigating discourse and scholarship in explainable ML [1/N]
Overarching claims: We have all seen talks/tweets/discourse with snippets such as "explanations don't work" or "explanations are the answer to all these critical problems". Such claims often extrapolate results or findings from rather narrow studies. [2/N]
When we hear overarching claims, it is helpful to step back and ask: What evidence backs such claims? Which studies are they based on? What is the context/application? How were the studies carried out? How reasonable is it to extrapolate from them? [3/N]
Excited to share our @AIESConf paper "Does Fair Ranking Improve Outcomes?: Understanding the Interplay of Human and Algorithmic Biases in Online Hiring". We investigate if fair ranking algorithms can mitigate gender biases in online hiring settings arxiv.org/pdf/2012.00423… [1/n]
More specifically, we examined the interplay between humans and fair ranking algorithms in online hiring settings, and assessed whether fair ranking algorithms can negate the effect of any gender biases prevalent in human recruiters and ensure that hiring outcomes are fair [2/n]
We found that fair ranking algorithms certainly help across all job contexts, but their effectiveness in mitigating gender biases (prevalent in online recruiters) heavily depends on the nature of the job. [3/n]
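For a flavor of what a fair ranking algorithm does, here is a toy re-ranker (illustrative only, not one of the algorithms used in our experiments): rank candidates by score, but require that every prefix of the ranking contains roughly a fraction p of candidates from the underrepresented group, promoting the best such candidate whenever the quota would otherwise be violated.

```python
# Toy prefix-quota re-ranker, in the spirit of fair ranking algorithms.
def fair_rerank(candidates, p=0.4):
    """candidates: list of (name, score, group); group "F" is protected here."""
    pool = sorted(candidates, key=lambda c: -c[1])   # descending score
    ranking = []
    while pool:
        k = len(ranking) + 1                          # position being filled
        have = sum(c[2] == "F" for c in ranking)      # protected so far
        if have < int(p * k):
            # Quota would be violated: promote the best protected candidate
            # (fall back to the overall best if none remain).
            pick = next((c for c in pool if c[2] == "F"), pool[0])
        else:
            pick = pool[0]
        ranking.append(pick)
        pool.remove(pick)
    return ranking

cands = [("a", 0.9, "M"), ("b", 0.8, "M"), ("c", 0.7, "M"),
         ("d", 0.6, "F"), ("e", 0.5, "F")]
for name, score, group in fair_rerank(cands):
    print(name, score, group)   # "d" is promoted above "c" to satisfy the quota
```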