Tired of paper PDFs? Brainstorm with us about the future of research communication at @rethinkmlpapers (@iclr_conf Friday)!
Talks & panel by David Ha, Terence Parr @evelynevs @FalaahArifKhan @Hugo_larochelle @jeffbigham @lillian_weng @deviparikh

🧵 Some ideas from the program:
The workshop is organized by
@_krishna_murthy, Bhairav Mehta, @Breandan @amy_tabb @Khimya @annargrs @adityakusupati @sarahookr @tegan_maharaj @deviparikh @DerekRenderling @PoloChau & Yoshua Bengio. Hope to see you there!

More from @annargrs

1 May
🤦‍♀️ The only good thing about this is how much attention it attracted, so hopefully @icmlconf will reconsider.
It can't even work, since peer review is only reliable for the clearly bad papers. Decisions on borderline papers are as good as random. This won't "raise the bar"; it'll only reinforce AC/SAC preferences, and will likely improve the chances of preprinted papers by famous people.
A paper on all of the above by @IAugenstein and yours truly:
9 Oct 20
New paper📜: What Can We Do to Improve Peer Review in NLP?
with @IAugenstein

TLDR: In its current form, peer review is a poorly defined task with apples-to-oranges comparisons and unrealistic expectations. /1
Reviewers resort to heuristics such as reject-if-not-SOTA to cope with uncertainty, so the only way to change that is to reduce uncertainty. That is at least partly doable: better paper-reviewer matching, unambiguous evaluation criteria, fine-grained tracks, better review forms, etc. /2
Which criteria and forms, exactly? Each field has to find out for itself, through iterative development and experiments. Except that in NLP such work would be hard to publish, so there are no incentives to do it - and no mechanisms to test and compare any solutions. /3
30 Aug 20
Preprint anonymity debate continues!

TLDR for those who missed the prior discussion: non-anonymous preprints systematically disadvantage unknown labs and underrepresented communities.
My previous post: hackingsemantics.xyz/2020/anonymity/ /1
A new post by @ducha_aiki and @amy_tabb argues that this fairness comes at a steep opportunity cost for small labs. Full text here: amytabb.com/ts/2020_08_21/
To summarize both posts, we have the following trade-off for the unknown/underrepresented authors:

* anonymous preprints: better acceptance chance;
* arXiv: lower acceptance chance, but more opportunities to promote unpublished work and get invited for talks and interviews.
3 Jul 20
#NLProc #metatweet for @emnlp2020 workshops! Check it out:

* if you missed the @coling2020 deadline😉
* if you have any questions: we linked to announcement threads!
* to find folks to follow in your field: we tried to tag all the organizers!

Joint effort with @fblain.
2. Workshop on Insights from Negative Results in NLP (#NLPInsights20)
Organisers: @annargrs @JoaoSedoc @arumshisky
Deadline: August 15, 2020

13 Jun 20
I really enjoyed this episode of #nlphighlights with @earnmyturns. It is about managing industry research teams, but also, more generally, about incentives in research and the need for intellectual diversity.


A few highlights from the highlights: /1
If hiring decisions are guided by the number of ACL/NeurIPS papers, you will hire essentially the same person over and over again: probably with a CS background, from a top US school, white, male, with the means to ignore everything for the sake of *ACL deadlines for a few years. /2
With more of the same kind of people, you will keep making incremental improvements to the same thing you're already doing, instead of trying to do something radically better. That would require intellectual diversity, so hiring managers should be casting a wider net. /3
