The 2010s were an eventful decade for NLP! Here are ten shocking developments since 2010, and 13 papers* illustrating them, that have changed the field almost beyond recognition.
(* in the spirit of @iamtrask and @FelixHill84, exclusively from other groups :)).
Shock 1 (2010): Remember neural networks? They might be much more useful for NLP than we thought. Please learn about recurrent neural networks (RNNs, [1]) and recursive neural networks (RNNs, [2]).
My postdoc, PhD student, and RA have a cool new paper on arXiv; it will be presented at CICLing this Tuesday at 12h10, and as a poster on Tuesday at the MPI Nijmegen.
Lisa Beinborn, Samira Abnar, Rochelle Choenni (2019): Robust evaluation of language-brain encoding experiments, arxiv.org/abs/1904.02547
1/5
It's part of a new line of research in which we study how well neural language models allow us to predict brain activity -- *cool* because we learn more about the cognitive plausibility of our models!
E.g., in early 2018, we studied how well 8 word embedding models predict fMRI data.
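For readers unfamiliar with encoding experiments, here is a minimal sketch of the general idea (not the exact pipeline of Beinborn et al. 2019, and all data shapes, the ridge penalty, and the `encoding_score` helper are hypothetical): fit a regularized linear map from word-embedding features to fMRI voxel responses, and score it by how well held-out brain activity is predicted.

```python
# Sketch of a language-brain encoding experiment (toy data, illustrative only):
# learn a linear map from embedding features to fMRI voxel activations and
# report a cross-validated prediction score.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Toy stand-ins: 200 stimuli, 300-dim embeddings, 1000 voxels.
embeddings = rng.normal(size=(200, 300))   # model representations of the stimuli
fmri = rng.normal(size=(200, 1000))        # measured voxel responses per stimulus

def encoding_score(X, Y, alpha=1.0, n_splits=5):
    """Mean per-voxel correlation between predicted and held-out fMRI responses."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train_idx], Y[train_idx])
        pred, true = model.predict(X[test_idx]), Y[test_idx]
        # Correlate prediction and measurement for every voxel on the test fold.
        voxel_corr = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(Y.shape[1])]
        scores.append(np.nanmean(voxel_corr))
    return float(np.mean(scores))

print("cross-validated encoding score:", encoding_score(embeddings, fmri))
```

Comparing different embedding models then amounts to swapping in their representations for the same stimuli and comparing these cross-validated scores; how to do that comparison robustly is exactly what the paper above is about.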