Sentence embeddings (e.g., SBERT) are powerful -- but we just don't know what is crammed into a %&!$# vector 😵‍💫.
🔥 So in our new paper, we use Abstract Meaning Representation (AMR) to make sentence embeddings more explainable! #AACL2022 #nlproc #MachineLearning (1/3)
Interestingly, while we use AMR, we don't need an AMR parser 🤯. Therefore, we don't lose efficiency 🚀. The accuracy 🎯 is also preserved, and sometimes even improved (for argument similarity, we achieve a new state of the art). (2/3)
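To give a feel for the "explainable" part: a minimal sketch of the general idea of aspect-decomposed similarity. Everything here is illustrative and hypothetical (the aspect names, the toy 6-dim vectors, and the fixed subspace slices are made up); the actual paper learns AMR-informed sub-embeddings rather than hard-coding slices.

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical setup: a 6-dim embedding partitioned into named subspaces,
# each standing in for one AMR-derived meaning aspect.
ASPECTS = {
    "concepts": slice(0, 2),
    "roles": slice(2, 4),
    "negation": slice(4, 6),
}

def explain_similarity(emb_a, emb_b):
    # Instead of one opaque score, return one cosine score per aspect,
    # so we can see *where* two sentences agree or differ.
    return {name: cosine(emb_a[s], emb_b[s]) for name, s in ASPECTS.items()}

# Toy vectors: similar concepts, identical roles, opposite negation signal.
emb_a = [0.9, 0.1, 0.5, 0.5, 0.0, 1.0]
emb_b = [0.8, 0.2, 0.5, 0.5, 1.0, 0.0]
print(explain_similarity(emb_a, emb_b))
# "roles" comes out at 1.0, "negation" at 0.0 -- the overall similarity
# decomposes into interpretable per-aspect ratings.
```

The appeal is that the per-aspect scores can be compared and debugged individually, whereas a single cosine over the full vector hides which meaning facets drive the score.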
(I'd also like to take the opportunity to mention @Nils_Reimers and @mdtux, whose great work on sentence embeddings and AMR definitely inspired me.) (3/3)