Discover and read the best of Twitter Threads about #ACL2019nlp

#rep4nlp Yulia Tsvetkov talk #4

"Modeling Output spaces of NLP models" instead of the common #Bertology that focuses on Modeling input spaces only.

#ACL2019nlp
The focus of the presentation is on conditional language generation: #MT, #summarization, etc.
"to be able to build diverse NLP models for 1000s of users we have to build 100ks of models for combinations of:

* Languages
* Tasks
* Domains
* People's preferences"
Read 19 tweets
Talk 3: @raquelfdzrovira on representations shaped by dialogue interaction data.

#ACL2019nlp #rep4nlp
"Task-oriented dialogue" is the setup we are discussing now because it gives us success notion to the dialogue analyse
The plan:

Instead of pre-defined symbolic representations for dialogue systems, let's model visually grounded agents that learn to "see, ask and guess".
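For intuition, here is a minimal sketch of such a "see, ask and guess" loop (the function names, the oracle interface and the success signal are my own illustration, not the speaker's code):

```python
# Hypothetical sketch of a visually grounded guessing game: the questioner
# asks yes/no questions about an image and finally guesses which object the
# oracle has in mind. Task success (guess == target) is the notion of success
# the dialogue is judged by.

def play_guessing_game(image, objects, target, questioner, oracle, max_turns=5):
    dialogue = []
    for _ in range(max_turns):
        question = questioner.ask(image, dialogue)       # "see" + "ask"
        answer = oracle.answer(question, target)         # yes / no / n-a
        dialogue.append((question, answer))
    guess = questioner.guess(image, objects, dialogue)   # "guess"
    return guess == target                               # dialogue-level success signal
```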
Read 13 tweets
#rep4nlp Invited talk 2: @mohitban47 "Adversarially Robust Representation Learning"

#acl2019nlp
@mohitban47 Adversarial examples can break reading comprehension systems.

"Adversarial Examples for Evaluating Reading Comprehension Systems" Jia and liang 2017
arxiv.org/abs/1707.07328
To address this: "AddSentDiverse", a modification of AddSent (Jia and Liang 2017) that uses rule-based semantic perturbations to produce adversarial examples for robust training.

Robust Machine Comprehension Models via Adversarial Training (NAACL2018 short)
arxiv.org/pdf/1804.06473…
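As a rough, hypothetical illustration of the AddSent-style idea (not the authors' code): a distractor sentence that mimics the question's wording but swaps its entities is appended to the passage, and a robustly trained model should keep its original answer.

```python
# Hypothetical sketch of AddSent-style augmentation for reading comprehension:
# append a distractor that looks relevant to the question but cannot answer it.

def add_distractor(passage, distractor):
    """Append a distractor; a robust QA model should not change its answer."""
    return passage.rstrip() + " " + distractor

passage = "Tesla moved to Prague in 1880."
question = "In what year did Tesla move to Prague?"
# The distractor mimics the question's wording but swaps the entities,
# so it cannot be the correct answer span (the answer stays "1880").
distractor = "Edison moved to Chicago in 1930."
print(add_distractor(passage, distractor))
```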
Read 10 tweets
Marco Baroni is starting the first invited talk at #rep4nlp
"Language is representations by itself. " #ACL2019nlp @sigrep_acl
@sigrep_acl Marco is talking about his previous work on the emergence of language in communication between agents.

References:

"MULTI-AGENT COOPERATION
AND THE EMERGENCE OF (NATURAL) LANGUAGE"
arxiv.org/pdf/1612.07182…

"How agents see things"
aclweb.org/anthology/D18-…

#ACL2019nlp
@sigrep_acl Efficient encoding of input information:

"We explore providing some information to the sender and receiver agents and look at the emerging language if it develops to ignore the redundant parts of the input"

Kharitonov et al. 2019
arxiv.org/abs/1905.13687

#ACL2019nlp #rep4nlp
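An illustrative sketch (my own, not the paper's code) of the kind of check this implies: vary only the redundant half of the sender's input and test whether its message changes.

```python
# Toy check for an "efficient" emergent code: if the message is unchanged when
# only the redundant part of the input varies, the language ignores that part.

def message_ignores_redundant(sender, informative, redundant_a, redundant_b):
    return sender(informative, redundant_a) == sender(informative, redundant_b)

# A toy sender whose code happens to be efficient (it ignores the second argument):
efficient_sender = lambda info, red: f"m{info}"

print(message_ignores_redundant(efficient_sender, 3, 0, 7))  # True -> redundant part ignored
```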
Read 13 tweets
@yuedongP et al. presenting: a neural programmer-interpreter model for sentence simplification.
MT-based methods for text simplification suffer from conservatism, meaning they simplify less and copy more. This is mainly due to the high overlap between source and target sentences in monolingual simplification data.
Neural programmer-interpreter:

Programmer: predicts an edit label per token (keep, delete, add, etc.)

Interpreter: executes the predicted edit labels to generate the simplified text
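A minimal sketch of the interpreter step (my own simplification, with an assumed label set of KEEP / DELETE / ADD:word):

```python
# Hypothetical interpreter: execute per-token edit labels predicted by the
# "programmer" to produce the simplified sentence.

def interpret(source_tokens, edit_labels):
    output = []
    for token, label in zip(source_tokens, edit_labels):
        if label == "KEEP":
            output.append(token)
        elif label == "DELETE":
            continue
        elif label.startswith("ADD:"):
            output.append(label.split(":", 1)[1])  # insert the new word...
            output.append(token)                   # ...and keep the source token
    return " ".join(output)

tokens = ["The", "results", "were", "ostensibly", "conclusive"]
labels = ["KEEP", "KEEP", "KEEP", "DELETE", "KEEP"]
print(interpret(tokens, labels))  # -> "The results were conclusive"
```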
Read 5 tweets
HighRES: Reference-less Evaluation of Summarisation

Shashi Narayan, Hardy and @vlachos_nl
#acl2019nlp
Automatic single-reference-based evaluation of summarisation is biased for several reasons, and human evaluation is not feasible at scale.
To avoid disagreement between manual annotators, instead of asking them to write summaries they ask them to:

1) Manually highlight salient content in the documents (10 annotators, 30 words max)

2) Evaluate summaries with respect to the highlighted content.
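A hedged sketch (not the HighRES implementation) of how a summary might be scored against the union of highlighted spans instead of a single reference:

```python
# Illustrative only: token-overlap precision/recall of a system summary
# against human-highlighted salient spans.

from collections import Counter

def highlight_precision_recall(summary, highlights):
    summ = Counter(summary.lower().split())
    high = Counter(" ".join(highlights).lower().split())
    overlap = sum((summ & high).values())            # clipped token overlap
    precision = overlap / max(sum(summ.values()), 1)
    recall = overlap / max(sum(high.values()), 1)
    return precision, recall

highlights = ["the cabinet approved the new climate bill",
              "emissions must fall 40% by 2030"]
summary = "The cabinet approved a climate bill cutting emissions 40% by 2030"
print(highlight_precision_recall(summary, highlights))
```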
Read 5 tweets
ACL keynote #1, Liang Huang: Simultaneous Translation (machine interpretation)

One of the main reasons for latency in simultaneous machine translation is word order (e.g. the German verb comes at the end). #ACL2019nlp
The current industry solution is to translate sentence by sentence, which introduces latency. Work in academia includes methods that either anticipate the "German verb" on the source side,
or use RL to keep waiting for the German verb (Gu et al. 2017).
"prefix to prefix translation" is to pull the effort of the anticipation of the late verb on the LM of the decoder with a wait K policy which is very natural to what interpreters do in real life.
Read 5 tweets
Summarization #1, Makino et al. presenting: Global Optimization under Length Constraint (GOLC) for Neural Text Summarization

tl;dr: minimum risk training with a reward combining ROUGE and an overlength penalty.

Overlength summaries drop from 19% (MLE) to 6% (GOLC).
#acl2019nlp
paper: Global Optimization under Length Constraint for Neural Text Summarization
Takuya Makino, Tomoya Iwakura, Hiroya Takamura and Manabu Okumura
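Roughly (my own paraphrase, not the paper's exact objective), minimum risk training here weights each sampled summary by its model probability and by a reward that combines ROUGE with a penalty for exceeding the length budget:

```python
# Illustrative minimum-risk loss with an overlength penalty; `rouge` and the
# sampling/renormalisation of `probs` are assumed to be provided elsewhere.

def risk_loss(samples, probs, reference, max_len, rouge, penalty_weight=1.0):
    loss = 0.0
    for summary, p in zip(samples, probs):
        over = max(0, len(summary.split()) - max_len)         # tokens beyond the budget
        reward = rouge(summary, reference) - penalty_weight * over
        loss += p * (-reward)                                 # expected negative reward
    return loss
```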
Read 3 tweets
Machine learning #1 #ACL2019nlp
Tao Li et al.: effectively augmenting NNs with logical constraints provides interpretability and control over some layers.
"If you have large datasets you should just believe in the data but with smaller datasets logical rules definitely help"
paper:
Augmenting Neural Networks with First-order Logic
Tao Li and Vivek Srikumar
arxiv.org/abs/1906.06298
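One common way to realize such constraints (a sketch under my own assumptions, not necessarily the paper's exact construction) is to relax a rule like "if A then B" into a differentiable penalty on the relevant activations:

```python
# Illustrative soft constraint: penalize probability mass where the antecedent
# holds but the consequent does not, and add it to the task loss.

import torch

def implication_penalty(antecedent_prob: torch.Tensor, consequent_prob: torch.Tensor) -> torch.Tensor:
    # Soft "A -> B": nonzero only where P(A) > P(B).
    return torch.relu(antecedent_prob - consequent_prob).mean()

# usage (hypothetical): loss = task_loss + lambda_rule * implication_penalty(p_a, p_b)
```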
Read 3 tweets
Tutorial 2: Storytelling from structured data and knowledge graphs #ACL2019nlp
Anirban Laha @anirbanlaha, Parag Jain
#data2text #nlproc #NLG
@anirbanlaha Motivations for #Data2Text:
* Answer display in Question Answering systems
* KB summarization
* Question Generation
@anirbanlaha 4D perspective for #Data2Text
Paradigms, domains, tasks and facets are the four axes of variation.
Read 40 tweets
#ACL2019nlp is kicking off with "Latent Structure Models for NLP" ~ Andre F. T. Martins, Tsvetomila Mihaylova @meloncholist @vnfrombucharest

The tutorial slides can be found here: deep-spin.github.io/tutorial/acl.p…

updates here 👇👇

#ACL2019nlp
@meloncholist @vnfrombucharest Andre is starting with a motivational introduction to some structured prediction tasks (POS tagging, dependency parsing, word alignment).
@meloncholist @vnfrombucharest * #NLProc before (pipelines) and after (end to end)

* End-to-end models learn latent continuous vectors that are useful for downstream tasks, but which might not be as interpretable as structured hidden representations.
Read 40 tweets
Niven & Kao's upcoming #acl2019nlp paper "Probing Neural Network Comprehension of Natural Language Arguments" asks exactly the right question of unreasonable performance: "what has BERT learned about argument comprehension?"

Preprint:
arxiv.org/abs/1907.07355

/1
They show, with careful experiments, that in fact, β€œBERT has learned nothing about argument comprehension.” Rather: β€œAs our learners get stronger, controlling for spurious statistics becomes more important in order to have confidence in their apparent performance.” /2
This kind of careful work, featuring careful attention to the data, is exactly what #NLProc needs more of! /3
Read 4 tweets
