Discover and read the best of Twitter Threads about #acl2020nlp

Most recent (12)

Some reflections on @emilymbender and @alkoller's #acl2020nlp paper on form and meaning, and an attempt to crystallize the ensuing debate: blog.julianmichael.org/2020/07/23/to-…
My take, roughly: there are good points on all sides, and I think we might be able to reconcile the main disagreements once we hash out the details (resolve misinterpretations, make assumptions more explicit, and give more examples). Though doing so took me 8,000 words (oops).
More specifically: many of the criticisms of the paper are based on viewing the octopus test as a Turing Test-style diagnostic. Within this framing, I think the criticisms are valid. But the paper's claims also have important implications outside this framing, and those are valid as well.
My first #ICML2020 was different from my n-th #acl2020nlp; despite that, or perhaps because of it, I did try to look for interesting papers that I could relate to but that might still teach me something new!

Papers, in roughly chronological order---each with a short summary :) [1/42]
“How Good is the Bayes Posterior in Deep Neural Networks Really?” (Florian Wenzel/@flwenz, Kevin Roth, @BasVeeling, Jakub Świątkowski, Linh Tran, @s_mandt, @JasperSnoek, @TimSalimans, @RJenatton, Sebastian Nowozin)

arxiv.org/abs/2002.02405


#ICML2020 [2/42]
[“How Good is the Bayes Posterior in Deep Neural Networks Really?” cont.]

As shown in @andrewgwils’ awesome tutorial, tempering works, probably because of bad priors?

#ICML2020 [3/42]
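
For anyone who missed the tutorial, a rough sketch of what tempering means here (my gloss, not the authors' notation): the likelihood and prior are raised to a power 1/T, so a "cold" temperature T < 1 sharpens the posterior around its modes, and T = 1 recovers standard Bayes.

"""
% Cold (tempered) posterior, in LaTeX; T is the temperature.
% T = 1 is the standard Bayes posterior; the paper reports that
% T < 1 predicts better in deep nets, possibly due to bad priors.
p_T(\theta \mid D) \;\propto\; \bigl( p(D \mid \theta)\, p(\theta) \bigr)^{1/T}
"""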
On the transformer side of #acl2020nlp, three works stood out to me as relevant if you've followed the Illustrated Transformer/BERT series on my blog:
1- SpanBERT
2- BART
3- Quantifying Attention Flow
(1/n)
SpanBERT (by @mandarjoshi_ @danqi_chen @YinhanL @dsweld @LukeZettlemoyer @omerlevy_) came out last year but was published in this year's ACL. It found that BERT pre-training works better when you mask contiguous spans of tokens, rather than BERT's 15% of scattered individual tokens.
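
A toy JavaScript sketch of the difference (my own simplification, not the paper's code):

"""
// Toy sketch (mine, heavily simplified). BERT-style masking picks ~15% of
// positions at random. (The real recipe then replaces 80% of picks with
// [MASK], 10% with random tokens, and leaves 10% unchanged.)
const MASK = "[MASK]";

function maskScattered(tokens, p = 0.15) {
  return tokens.map(t => (Math.random() < p ? MASK : t));
}

// SpanBERT-style masking: mask whole contiguous spans until the same
// overall budget is spent. The paper samples span lengths from a geometric
// distribution; a fixed length keeps this sketch short.
// (Assumes tokens.length > spanLen.)
function maskSpans(tokens, budget = 0.15, spanLen = 3) {
  const out = tokens.slice();
  let masked = 0;
  while (masked < budget * tokens.length) {
    const start = Math.floor(Math.random() * (tokens.length - spanLen + 1));
    for (let i = start; i < start + spanLen; i++) {
      if (out[i] !== MASK) { out[i] = MASK; masked++; }
    }
  }
  return out;
}
"""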
BART (@ml_perception @YinhanL @gh_marjan @omerlevy_ @vesko_st @LukeZettlemoyer) presents a way to bring what we've learned from BERT (and SpanBERT) back into encoder-decoder models, which are especially important for summarization, machine translation, and chatbots. 3/n #acl2020nlp
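
And a matching toy sketch (again mine, simplified) of BART's text-infilling corruption, where a whole span collapses to a single mask token, so the model must also infer the span's length:

"""
// Toy sketch (mine): BART's text infilling replaces a whole span with a
// *single* [MASK]. (Span lengths are Poisson-sampled in the paper; here
// they are just arguments.)
function infill(tokens, start, len) {
  return [...tokens.slice(0, start), "[MASK]", ...tokens.slice(start + len)];
}

// infill(["the","cat","sat","on","the","mat"], 1, 2)
//   -> ["the", "[MASK]", "on", "the", "mat"]
// The encoder reads the corrupted sequence; the decoder is trained to
// reconstruct the original, standard seq2seq style.
"""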
Inspired by @yoavgo 's poll, I looked at the views for papers in three tracks -- Ethics, Summarization, and Theme (69 papers in total).

The median number of views per paper was 104.

In these three tracks, the most-viewed papers at the time of writing are ...
1. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data by @emilymbender and @alkoller (961 views)

2. How Can We Accelerate Progress Towards Human-like Linguistic Generalization? by @tallinzen (410 views)

3. The Unstoppable Rise of Computational Linguistics in Deep Learning by @JamieBHenderson (356 views)

4. (Re)construing Meaning in NLP by @Sean_Trott @TorrentTiago @nancy_c_chang @complingy (291 views)
Look, I appreciate the spirit of this work, but non-binary erasure shouldn't have any place at #acl2020nlp

This work makes my blood boil.

aclweb.org/anthology/2020…
NB folx are **not** a variable that you can just throw away for the sake of simplifying your analysis.

And don't get me started on gender-labeling individuals based on their names.

#acl2020nlp
yes, it's all acknowledged (in a footnote; we don't even deserve the main body, apparently), but if your work requires making all these assumptions, maybe just give up?

#acl2020nlp
The #acl2020nlp best paper awards are about to be announced now!
Demo honorable mention (1):

Torch-Struct: Deep Structured Prediction Library
Alexander Rush
aclweb.org/anthology/2020…

#acl2020nlp
Demo honorable mention (2):

Prta: A System to Support the Analysis of Propaganda Techniques in the News
Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño and Preslav Nakov
aclweb.org/anthology/2020…

#acl2020nlp
#acl2020nlp PSA: if you also find the rocket-chat UI showing the threads in both the thread window AND the main window to be unbearable,

Leonie found the solution!

Thanks Leonie!!
(update: she said she found this tip in a thread by @ojahnn )
(update2: actually the thread was by @EmmaSManning )
And now, what will we be presenting tomorrow at #acl2020nlp? three papers (thread)
Work by former lab member @roeeaharoni (with very little involvement by me, I must say ;) ) on emergent domain clusters in pre-trained LMs and how we can use them in NMT:
virtual.acl2020.org/paper_main.692…
Work by @ravfogel @yanaiela @hila_gonen and Michael Twiton, on an iterative method to remove information from neural representations, with some guarantees. We apply this to gender bias, but the applicability is much broader imo.
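
A minimal sketch of the core step as I read it (my own code, reduced to a single probe direction; the paper projects onto the nullspace of a full linear classifier and proves guarantees about the iteration):

"""
// Minimal sketch (hypothetical, simplified): remove each vector's component
// along a probe direction w, i.e. x' = x - (x·w / w·w) w, then repeat with
// a freshly trained probe.
function projectOutDirection(vectors, w) {
  const wNorm2 = w.reduce((s, wi) => s + wi * wi, 0);
  return vectors.map(x => {
    const dot = x.reduce((s, xi, i) => s + xi * w[i], 0);
    return x.map((xi, i) => xi - (dot / wNorm2) * w[i]);
  });
}

// Assumed usage: `trainLinearProbe` is a hypothetical stand-in for any
// linear classifier trainer; iterate until the probe is at chance accuracy.
// let reps = loadRepresentations();            // hypothetical helper
// for (let i = 0; i < numIterations; i++) {
//   const w = trainLinearProbe(reps, labels);  // returns a weight vector
//   reps = projectOutDirection(reps, w);
// }
"""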
Today at #acl2020nlp is not over, but here are our sessions for tomorrow:
Alon's thoughtful comments on what it means to do interpretation "right", and what it means for an interpretation to be faithful.
Tomer Wolfson, @megamor2 , Ankit Gupta, @nlpmattg , Daniel Deutch and @JonathanBerant
on a cool benchmark for question understanding: decomposing complex questions into a series of simpler ones.
virtual.acl2020.org/paper_tacl.184…
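
A made-up example of the kind of decomposition involved (mine, for illustration; not an item from the benchmark):

"""
// Hypothetical illustration: a complex question broken into simpler steps,
// where later steps reference the answers of earlier ones by number.
const example = {
  question: "What movies did the director of Titanic release before 1995?",
  decomposition: [
    "return the director of Titanic",
    "return movies released by #1",
    "return #2 released before 1995",
  ],
};
"""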
Thoughts on Kathy McKeown's #acl2020nlp keynote:
someone should go back and categorize all the "what's most important about deep learning" responses---literally nobody agrees! Accessibility, scalability, empirical performance, representations of lexical meaning, feature representations more broadly, support for new inputs, ...
Always good to be reminded that apart from Aravind, almost all of the "founding members" of what's now the NLP community were women: Spärck Jones, Webber, Grosz, Hajičová.
A bunch of works from my group(s) coming up at #acl2020nlp tomorrow. Watch the videos and come visit us in the Q&A sessions!
In work with @lambdaviking @gail_w @royschwartz02 @nlpnoah and @yahave we provide *theoretical* results (yes, with proofs) of things that can and cannot be represented by various kinds of RNNs, and under what conditions.
virtual.acl2020.org/paper_main.43.…

+ blog:
lambdaviking.com/post/rr-hierar…
If you are working in computational social sciences, digital humanities, etc., check out the work with @hila_gonen, Ganesh Jawahar, and @zehavoc.

We present a *simple* and *effective* method for identifying word usage change across corpora.

virtual.acl2020.org/paper_main.51.…
continuing the #acl2020nlp website tweaks, the random paper ordering is great for discoverability, but when going over a list of papers and marking what i want to follow up on, it is distracting to have the order change on me every time.

solution in next tweet.
in chrome, add a bookmark with the following content:
"""
javascript:allPapers = allPapers.sort((a,b) => (a.id < b.id) ? -1 : 1); render();
"""
(without quotes, naturally)

clicking it will re-order all displayed papers to a fixed order.
(the order is based on paper id, and not title/abstract/author/etc, to maintain the 'random' feel, + it was the shortest to write)
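
The same script unrolled and commented, for readability (paste it into the browser console; bookmarklets themselves must stay on one line):

"""
// Same logic as the bookmarklet above, just spread out and commented.
// `allPapers` and `render()` are globals defined by the conference site
// itself (assumed here, as in the bookmarklet).
allPapers = allPapers.sort((a, b) => (a.id < b.id) ? -1 : 1); // fixed order by paper id
render(); // redraw the paper list with the new ordering
"""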