🚨 Our #ICLR2021 paper shows that KG-augmented models are surprisingly robust to KG perturbation! 🧐

arXiv: arxiv.org/abs/2010.12872
Code: github.com/INK-USC/deceiv…

To learn more, come find us at Poster Session 9 (May 5, 5-7PM PDT): iclr.cc/virtual/2021/p….

🧵[1/n]
KGs have helped neural models perform better on knowledge-intensive tasks and even “explain” their predictions, but are KG-augmented models really using KGs in a way that makes sense to humans?

[2/n]
We investigate this question primarily by measuring how KG-augmented models’ performance changes when the KG’s semantics and/or structure are perturbed such that the KG becomes less human-comprehensible (see the sketch after this tweet).

[3/n]
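For intuition only, here is a minimal sketch of a perturb-then-evaluate protocol like the one described above. Everything here (`evaluate`, `model`, `perturb_fn`, the numbers) is a hypothetical stand-in, not the paper's actual API:

```python
def robustness_gap(evaluate, model, data, kg, perturb_fn):
    """Accuracy with the original KG minus accuracy with a perturbed KG.
    All arguments are hypothetical stand-ins for a real KG-augmented
    pipeline (e.g., a commonsense QA model, its dev set, and its KG)."""
    return evaluate(model, data, kg) - evaluate(model, data, perturb_fn(kg))

# Toy usage with stub components:
stub_eval = lambda m, d, kg: 0.75 if kg == "original" else 0.74
gap = robustness_gap(stub_eval, None, None, "original", lambda kg: "perturbed")
print(gap)  # ~0.01: a near-zero gap means the perturbation barely hurt
```

A large gap would suggest the model depends on the KG's human-comprehensible content; a near-zero gap suggests it doesn't.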
If the KG has been greatly perturbed in this manner, then a “human-like” KG-augmented model should achieve much worse performance with the perturbed KG than with the original KG.

[4/n]
We propose four heuristics (RS, RR, ER, ED) and one RL algorithm (RL-RR) for perturbing the semantics and/or structure of the KG (illustrative sketch after this tweet). Unlike the heuristics, RL-RR aims to maximize the downstream performance of models that use the perturbed KG.

[5/n]
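For a flavor of what a semantic perturbation heuristic could look like, here is a toy relation-replacement sketch. The function name, triple format, and sampling details are assumptions for illustration, not the paper's exact procedures:

```python
import random

def perturb_relations(triples, relation_vocab, ratio=0.5, seed=0):
    """Toy heuristic (assumed, not the paper's): replace the relation in a
    random subset of (head, relation, tail) triples, corrupting the KG's
    semantics while leaving its graph structure intact."""
    rng = random.Random(seed)
    triples = list(triples)
    n_perturb = int(len(triples) * ratio)
    for i in rng.sample(range(len(triples)), n_perturb):
        head, rel, tail = triples[i]
        new_rel = rng.choice([r for r in relation_vocab if r != rel])
        triples[i] = (head, new_rel, tail)
    return triples

# Toy KG as (head, relation, tail) triples:
kg = [("dog", "IsA", "animal"), ("dog", "HasA", "tail")]
print(perturb_relations(kg, ["IsA", "HasA", "UsedFor"], ratio=1.0))
```

An RL-based perturber like RL-RR would instead choose which relations to replace so that downstream task performance is preserved, rather than replacing them at random.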
Interestingly, for both commonsense QA and item recommendation, the KG can be extensively perturbed with little to no effect on KG-augmented models’ performance! Here, we show results for KGs perturbed using RL-RR.

[6/n]
Plus, we find that the paths (from both the original KG and the RL-RR-perturbed KG) that KG-augmented models successfully utilize are hard for humans to read or use. This suggests that models and humans process KG info differently.

[7/n]
These findings raise doubts about the role of KGs in KG-augmented models and the plausibility of KG-based explanations. We hope that our paper can help guide future work in designing KG-augmented models that perform and explain better.

[8/n]
This work was led by @MrigankRaman, an undergrad intern at INK Lab (inklab.usc.edu). Many thanks to all of our co-authors: Siddhant Agarwal, Peifeng Wang, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, and @xiangrenNLP.

[9/n]
Oops, adding Siddhant's Twitter handle here too: @agsidd10.

[10/n]
