#EMNLP2021 ends, but the Insights from Negative Results workshop is coming tomorrow! The workshop is hybrid: virtual posters, plus talks by and for a mix of on-site & online speakers & attendees. Hosts: @JoaoSedoc @shabnamt1 @arumshisky @annargrs

Really proud of the program this year🧵:
8:45 Opening remarks
9:00 🗣️ Invited talk by Bonnie Webber: The Reviewers & the Reviewed: Institutional Memory & Institutional Incentives
10:00 💬🗨 Gathertown Poster session 1
11:30 🗣️ Invited talk by @zacharylipton: Some Results on Label Shift & Label Noise
12:30 🖇 Thematic session: insights from negative results in translation
14:00 🗣️ Invited talk by @rctatman: Chatbots can be good: What we learn from unhappy users
15:30 💬🗨 Gathertown Poster session 2
16:30 🖇 Thematic session: insights from negative results for BERT
17:00 🗣️ Invited talk by @nlpnoah: What Makes a Result Negative?
The Zoom link for the invited talks and oral sessions is on the Underline page. But that page is fairly useless as a schedule, so our program has direct links to all pre-recorded Underline videos AND papers in the ACL Anthology.
insights-workshop.github.io/2021/program/
Also, exciting news: this year Insights will have a best negative result paper award 🏆! The winner will be announced tomorrow, stay tuned.


More from @annargrs

10 Nov
A few highlights from @nlpnoah's talk at insights-workshop.github.io:

@nlpnoah: NLP research and practice ask fundamentally different questions
/1
@nlpnoah: NLP practice asks whether X improves the outcome. NLP research tries to fill in the gaps in the knowledge map.
/2
@nlpnoah: Leaderboards are the dominant frame for presenting research findings. That frame by its very nature puts the winner at the top and un-focuses everything else.
/3
9 Nov
A highlight from @StevenBird's fascinating #EMNLP2021 keynote:
NLP often comes with a set of assumptions about what communities with low-resource languages need. But we need to learn what they *actually* need; they may have a completely different epistemology.
/1
AR: this is such a thought-provoking talk, pointing at the missing bridges between language tech and social sciences, esp. anthropology. As a computational linguist lucky enough to spend a year at @CPH_SODAS, I still don't think I even see the depth of everything we're missing.
/2
An audience question (@bonadossou from @MasakhaneNLP?): how do we increase the volume of NLP research on low-resource languages when such work is not as incentivized?
@StevenBird: keep submitting. I've had many rejections. Theme track for ACL2022 will be language diversity.
/3
4 May
Tired of paper pdfs? Brainstorm with us about the future of research communication at @rethinkmlpapers (@iclr_conf Friday)!
Talks & panel by David Ha, Terrence Parr @evelynevs @FalaahArifKhan @Hugo_larochelle @jeffbigham @lillian_weng @deviparikh

🧵 Some ideas from the program:
1 May
🤦‍♀️ The only good thing about this is how much attention it attracted; hopefully @icmlconf will reconsider.
/1
It can't even work, since peer review is only reliable for the clearly bad papers; decisions on borderline papers are as good as random. This won't "raise the bar", it'll only reinforce AC/SAC preferences, and likely improve the chances of preprinted papers by famous people.
/2
A paper on all of the above by @IAugenstein and yours truly:
aclweb.org/anthology/2020…
/3
9 Oct 20
New paper📜: What Can We Do to Improve Peer Review in NLP?
arxiv.org/abs/2010.03863
with @IAugenstein

TLDR: In its current form, peer review is a poorly defined task with apples-to-oranges comparisons and unrealistic expectations. /1
Reviewers resort to heuristics such as reject-if-not-SOTA to cope with uncertainty, so the only way to change that is to reduce uncertainty. Which is at least partly doable: better paper-reviewer matching, unambiguous eval criteria, fine-grained tracks, better review forms, etc. /2
Which criteria and forms, exactly? Each field has to find out for itself, through iterative development and experiments. Except that in NLP such work would be hard to publish, so there are no incentives to do it, and no mechanisms to test and compare any solutions. /3
30 Aug 20
Preprint anonymity debate continues!

TLDR for those who missed the prior discussion: non-anonymous preprints systematically disadvantage unknown labs and/or underrepresented communities.
My previous post: hackingsemantics.xyz/2020/anonymity/ /1
A new post by @ducha_aiki and @amy_tabb argues that this fairness comes at a steep opportunity cost for small labs. Full text here: amytabb.com/ts/2020_08_21/
/2
To summarize both posts, we have the following trade-off for the unknown/underrepresented authors:

* anonymous preprints: better acceptance chance;
* arXiv: lower acceptance chance, but more opportunities to promote unpublished work and get invited for talks and interviews.
/3
