Discover and read the best of Twitter Threads about #ICLR2020

Most recent threads (8)

Compared to virtual #ICLR2020, I found that virtual #ICML2020 lacks a few features.

1. Papers are scheduled for 2 days instead of 1. I prefer to collect all the papers of interest for the day and then attend only those. Now I need to keep in mind whether I already attended a poster.

1/n
2. Videos are 10-15 minutes long. This forces me to *really* want to attend a poster. Having a tl;dr version (<5 min), or separate short and long videos as at ICLR, would be preferable.
3. One hour per poster session is often not enough for the amount of content. In session 1, I have 5+ posters of interest at 15 minutes each; that physically doesn't fit into the slot.
With @iclr_conf #ICLR2020 over and a bit of sleep under my belt, I'd like to give my short summary of a truly great event---and offer a list of the papers I enjoyed seeing (for those who are into that kind of thing).
In general, I feel lucky to live in a time where we have venues like these full of really interesting papers on the intersection between NLP and ML (and others, but that's what I personally am most into, so my experience is biased).
First off, echoing what everyone else concluded: the website was great. For those who didn't attend, I hope you'll get to see it soon. Having a prerecorded 5-minute talk for each paper along with the slides you could click through made for excellent paper browsing in my mind:
this paper from @shaohua0116 on guiding RL agents with program counterparts of natural language instructions was one of my favourites at #ICLR2020. here is why i think it's exciting and quite different from existing work. 1/6
there's a large literature of #NLProc work on semantic parsing (converting language->executable meaning representations) for a variety of tasks. this is helpful e.g., for database operations, for goals/rewards for planners, to ground to predefined robotic actions etc. 2/6
apart from select works, the programs are mostly treated as static---their executions are pre-defined, they are usually used once at some beginning/end-point (e.g. to produce a goal state for some RL algorithm/planner), and they do not extend over time or with interaction. 3/6
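for illustration (not from the paper, and with an entirely made-up grammar, ops, and environment), here is a toy sketch of that semantic-parsing pipeline: an instruction is mapped to a small executable program, and an interpreter runs it step by step against environment state.

```python
# Toy sketch: "semantic parsing" maps an instruction to an executable
# program, which an interpreter then runs against environment state.
# The instruction set, ops, and environment below are all hypothetical.

def parse(instruction: str):
    """Map a tiny fixed instruction set to a list of (op, arg) steps."""
    table = {
        "pick up the red block": [("goto", "red_block"), ("grasp", "red_block")],
        "go to the door":        [("goto", "door")],
    }
    program = []
    for clause in instruction.split(" then "):
        program.extend(table[clause])  # KeyError means an unparseable clause
    return program

def execute(program, state):
    """Run each step; unlike a one-shot goal spec, execution unfolds over time."""
    for op, arg in program:
        if op == "goto":
            state["agent_at"] = arg
        elif op == "grasp" and state["agent_at"] == arg:
            state["holding"] = arg
    return state

state = {"agent_at": "start", "holding": None}
prog = parse("pick up the red block then go to the door")
print(execute(prog, state))  # {'agent_at': 'door', 'holding': 'red_block'}
```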
here is work at #ICLR2020 by @FelixHill84 @AndrewLampinen @santoroAI + coauthors that nicely verifies (a subset of) the things put forth by several #NLProc position papers on language/grounding/meaning/form. some insights and reasons to read this paper: 1/5
this deals with richer 3D (rather than 2D) environments that allow systematic evaluation of phenomena against a richer variety of stimuli (here, only visual, but can extend to sensory information e.g., action effects, deformation, touch, sound + more realistic simulators). 2/5
unlike a lot of previous work, the tasks are not only navigation (whether over discrete / continuous spaces) but involve manipulation, positioning, gaze (through visual rays) which are far more complex motor activities. 3/5
Survey of #MachineLearning experimental methods (aka "how do ML folks do their experiments") at #NeurIPS2019 and #ICLR2020, a thread of results:
1. "Did you have any experiments in your paper?"

The future is empirical! Looking at NeurIPS papers historically (not just 2019), the number of purely theoretical submissions is dwindling; theory is now almost relegated to conferences like UAI, and that's unfortunate.
side note: There was a time when folks used to say, "what experiments? It's a NIPS paper!" (also, I am a dinosaur).
Recent studies have suggested that the earliest iterations of DNN training are especially critical. In our #ICLR2020 paper with @jefrankle and @davidjschwab, we use the lottery ticket framework to rigorously examine this crucial phase of training.

arxiv.org/abs/2002.10365
@jefrankle @davidjschwab Existing methods can't find winning lottery tickets at init on larger networks. Instead, they only seem to emerge early in training. We exploit this in our experiments as a way to causally measure the impact of various network properties on this early phase of training.
@jefrankle @davidjschwab First, while it is possible to reinitialize lottery tickets of small networks as long as the weights keep the same signs (Zhou et al., 2019), these results do not appear to hold on more complex models when we use weights from early in training.
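(A minimal sketch of the rewinding recipe described above, on a toy numpy "network"; train_step, the iteration counts, and the 20% pruning rate are placeholder assumptions, not the paper's setup.)

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(w):
    # Placeholder for a real SGD update; here just a random perturbation.
    return w - 0.01 * rng.normal(size=w.shape)

w = rng.normal(size=1000)   # "network" weights at initialization
w_init = w.copy()

# Train, snapshotting the weights early (iteration k).
k, total_iters = 50, 2000
for t in range(total_iters):
    if t == k:
        w_early = w.copy()
    w = train_step(w)

# Magnitude pruning: keep the largest 20% of final weights.
keep = np.abs(w) >= np.quantile(np.abs(w), 0.8)

# "Rewind": restart the surviving subnetwork from iteration-k weights
# rather than from initialization; the thread's claim is that tickets
# rewound this way train successfully where rewinding to init fails
# on larger networks.
ticket = np.where(keep, w_early, 0.0)
ticket_init = np.where(keep, w_init, 0.0)  # the variant that reportedly fails at scale
print(f"kept {keep.mean():.0%} of weights, rewound to iteration {k}")
```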
#ICLR2020 "after the paper submission deadline, if an author chooses to withdraw, it will remain hosted by OpenReview in a publicly visible "withdrawn papers" section. Like on arXiv, submissions to ICLR cannot be deleted. Withdrawn papers will be de-anonymized right away."
we have noticed a few authors have exploited a loophole in the openreview system. we are already on these cases and will revert those submissions to show the original title, abstract and author list.
while we fix this loophole, we have temporarily blocked the option to withdraw your submission but will re-enable it shortly. @openreviewnet @iclr_conf
Some metadata for those curious about their #ICLR2020 reviews.

1. A histogram of the average review scores.
2. Decile cutoffs (top x%).

Review scores this year at @iclr_conf seem substantially lower than in previous years, probably an artifact of the new [1,3,6,8] rating scale. (1/n)
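(For the curious, a minimal sketch of how such stats could be computed; the scores below are simulated from the [1,3,6,8] scale with made-up proportions, not the real review data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate ~2500 papers with 3 reviews each on the [1,3,6,8] scale
# (the probabilities here are invented for the sketch).
scores = rng.choice([1, 3, 6, 8], p=[0.2, 0.4, 0.3, 0.1], size=(2500, 3))
avg = scores.mean(axis=1)

# 1. Histogram of the average review score per paper.
counts, edges = np.histogram(avg, bins=10)

# 2. Decile cutoffs: the average score needed to be in the top x%.
for pct in (10, 20, 30, 40, 50):
    print(f"top {pct}% cutoff: {np.percentile(avg, 100 - pct):.2f}")
```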
@iclr_conf For experience:
Out of 7583 total #ICLR2020 reviews:
1078 "do not know much about this area"
2484 "have read many papers in this area"
2604 "have published 1 or 2 papers"
1417 "have published in this field for many years"

47% of reviews came from reviewers who haven't published in this area!
@iclr_conf For thoroughness:
601 "made a quick assessment of the paper"
4099 "read the paper at least twice and used their best judgement"
2698 "read the paper thoroughly"

(3/n)
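(A quick arithmetic check on the counts above, out of the 7,583 total reviews; the thoroughness counts sum to slightly less than the total, presumably because that field wasn't always filled in.)

```python
total = 7583  # total #ICLR2020 reviews, per the thread

experience = {
    "do not know much about this area": 1078,
    "have read many papers in this area": 2484,
    "have published 1 or 2 papers": 2604,
    "have published in this field for many years": 1417,
}
thoroughness = {
    "made a quick assessment of the paper": 601,
    "read the paper at least twice and used their best judgement": 4099,
    "read the paper thoroughly": 2698,
}

for name, counts in (("experience", experience), ("thoroughness", thoroughness)):
    print(name)
    for label, n in counts.items():
        print(f"  {n / total:5.1%}  {label}")

# The "47% haven't published in this area" figure: the first two
# experience buckets combined, as a share of all reviews.
no_pub = (experience["do not know much about this area"]
          + experience["have read many papers in this area"])
print(f"{no_pub / total:.0%} of reviews from reviewers without papers in the area")
```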
