Twitter might seem like a not-so-kind place, especially if you are a young student who just had your paper rejected by #NeurIPS2020. You might be seeing all your peers/professors talking about their paper acceptances. Let me shed some light on the reality of the situation [1/N]
Twitter (and social media in general) paints a biased view of a lot of situations, including this one (thechicagoschool.edu/insight/from-t…). Looking at your Twitter feed, you might be feeling that everyone else got their papers accepted except for you. That is so not true! [2/N]
#NeurIPS2020 has an acceptance rate of around 20%, which means an overwhelming majority of the papers (80%) were rejected. Also, a lot of the accepted papers might have already faced rejection(s) at other venues before being accepted at #NeurIPS2020. [3/N]
Clearly, people don’t talk about their failures and rejections on social media as much as they talk about their successes. Please be mindful of this bias. Shout out to amazing researchers like @SethVNeel, who did talk about their paper rejections [4/N]
A rejection doesn’t mean your work is bad. It is just a temporary setback. Hopefully you received useful feedback about what needs to be improved in your paper. Please take that feedback seriously and improve your research. Other deadlines are right around the corner! [5/N]
Just like any other process, reviewing at ML conferences is far from perfect. There is a lot of randomness involved -- your paper's acceptance might depend on which reviewer(s) you get. Don’t believe me? Take a look at this experiment from NeurIPS 2017 bit.ly/3cKyxxv [6/N]
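(Not the actual NeurIPS experiment, just a toy Monte Carlo sketch of the point above: if each committee sees a paper's quality only through a few noisy review scores and accepts the top ~20%, two independent committees end up disagreeing on a sizeable share of the accepted papers. All the specifics here -- 1000 papers, 3 reviewers, the noise level -- are arbitrary assumptions for illustration.)

```python
import random

# Toy model of review randomness; not the actual NeurIPS experiment.
random.seed(0)
N_PAPERS = 1000
ACCEPT_RATE = 0.20  # roughly the NeurIPS acceptance rate

def committee_scores(true_quality, noise=1.0):
    # Each committee sees the average of 3 noisy reviewer scores per paper.
    return [q + sum(random.gauss(0, noise) for _ in range(3)) / 3
            for q in true_quality]

def accepted(scores, rate):
    # Indices of the top `rate` fraction of papers by committee score.
    k = int(rate * len(scores))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

true_quality = [random.gauss(0, 1) for _ in range(N_PAPERS)]
committee_a = accepted(committee_scores(true_quality), ACCEPT_RATE)
committee_b = accepted(committee_scores(true_quality), ACCEPT_RATE)

overlap = len(committee_a & committee_b) / len(committee_a)
print(f"Papers accepted by committee A that B also accepted: {overlap:.0%}")
```

Under these made-up assumptions, the two committees typically agree on only a bit more than half of the accepted papers, which is the flavor of result the consistency experiment reported.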
Many awesome papers get rejected all the time (sciencealert.com/these-8-papers…). It has happened to me a ton of times: papers I thought were amazing got rejected, and ones I thought were “meh” got accepted. It is important to acknowledge this no matter what the outcome. [7/N]
While this rejection might be looming large in your mind now, over time you will probably not even remember it. It really doesn’t matter in the grand scheme of things. What matters is you enjoying the work that you do and giving it your best. [8/N]
So, please do yourself a favor and develop interest/passion/love for the research you do, and enjoy the work itself. That alone will help you navigate all the uncertainties of the bureaucratic processes surrounding research in the long run. [9/N]
Rejection hurts, but it is inevitable in every sphere of life, including academia. Learning to deal with it in a healthy way is a useful skill. Give yourself some time to worry about it, feel bad about it, and obsess about it. But, then get over it and move on! [10/N]
Please do yourself a favor and consider turning off social media for a few days, and instead go talk to your friends and loved ones who genuinely care about you and your happiness. [11/N]
A parting thought for mentors: Students with accepted papers are probably already feeling pretty good about themselves — so they don’t need your validation now. But, the student whose paper has been rejected really needs your support at this time. So, please reach out! [N/N]
