Discover and read the best of Twitter Threads about #naacl2019

Most recent threads (2)

Now live-tweeting "The Enigma of Neural Text Degeneration as the First Defense Against Neural Fake News" by Yejin Choi #neuralgen2019 #naacl2019
She motivates with a hilarious piece of machine-generated fake news claiming she is a co-founder of a self-driving ice cream truck company lmao
Neural fake news is here 😱😱😱😱
Does my unsupervised neural network learn syntax? In new #NAACL2019 paper with @chrmanning, our "structural probe" can show that your word representations embed entire parse trees.

paper: nlp.stanford.edu/pubs/hewitt201…
blog: nlp.stanford.edu/~johnhew/struc…
code: github.com/john-hewitt/st…
1/4
@chrmanning Key idea: Vector spaces have distance metrics (L2); trees do too (# of edges between words). Vector spaces have norms (L2); rooted trees do too (# of edges between a word and ROOT). Our probe finds a vector distance/norm on word representations that matches all tree distances/norms 2/4
These distances/norms reconstruct each tree and are parametrized by only a single linear transformation. What does this mean? In BERT and ELMo, we find syntax trees approximately embedded as a global property of the transformed vector space. (But not in baselines!) 3/4
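The core of the probe described above can be sketched as follows: a single learned linear map B takes contextual word vectors into a space where squared L2 distance between two words is trained to match their parse-tree distance (# of edges). This is a minimal illustrative sketch, not the authors' actual code (see the linked repo for that); the dimensions and random vectors below are stand-ins.

```python
import numpy as np

def probe_distances(word_vectors, B):
    """Squared L2 distances between all word pairs after the linear map B.

    In the structural probe, B is learned so that these values approximate
    parse-tree distances (# of edges between words).
    word_vectors: (seq_len, model_dim); B: (model_dim, rank).
    """
    transformed = word_vectors @ B                        # (seq_len, rank)
    # Pairwise differences via broadcasting, then squared L2 norm per pair
    diffs = transformed[:, None, :] - transformed[None, :, :]
    return (diffs ** 2).sum(axis=-1)                      # (seq_len, seq_len)

# Toy usage with random stand-ins for BERT/ELMo representations:
rng = np.random.default_rng(0)
vecs = rng.standard_normal((5, 1024))    # 5 words, 1024-dim vectors (hypothetical)
B = rng.standard_normal((1024, 64)) * 0.01
D = probe_distances(vecs, B)
# D is symmetric with a zero diagonal, like a tree-distance matrix;
# training would minimize |D[i, j] - tree_distance(i, j)| over word pairs.
```

Training (not shown) fits B by regressing these predicted distances onto gold parse-tree distances; the claim in the thread is that for BERT and ELMo a good B exists, while for baseline (non-contextual) representations it does not.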
