Improving AMR parsing by exploiting the dependency parsing as an auxiliary task

Bibliographic Details
Published in: Multimedia Tools and Applications 2021-08, Vol. 80 (20), p. 30827-30838
Main Authors: Wu, Taizhong; Zhou, Junsheng; Qu, Weiguang; Gu, Yanhui; Li, Bin; Zhong, Huilin; Long, Yunfei
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Abstract Meaning Representations (AMRs) represent sentence semantics as rooted, labeled, directed acyclic graphs. Although the AMR graph of a sentence correlates strongly with its dependency tree, recent neural network AMR parsers neglect dependency structure information. In this paper, we explore a novel approach to exploiting dependency structures for AMR parsing. Unlike traditional pipeline models, we treat dependency parsing as an auxiliary task for AMR parsing under a multi-task learning framework, sharing neural network parameters and selectively extracting syntactic representations with an attention mechanism. In particular, to balance the gradients and keep the focus on the AMR parsing task, we present a new dynamic weighting scheme for the loss function. Experimental results on the LDC2015E86 and LDC2017T10 datasets show that our dependency-auxiliary AMR parser significantly outperforms both the baseline and its pipeline counterpart, and demonstrate that neural AMR parsers can be greatly improved by effective methods of integrating syntax.
ISSN: 1380-7501
EISSN: 1573-7721
DOI: 10.1007/s11042-020-09967-3
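
The abstract names three technical components: a shared encoder trained under multi-task learning, an attention mechanism that selectively extracts syntactic representations, and a dynamic weighting scheme in the loss. Below is a minimal PyTorch sketch of such a setup, assuming a BiLSTM encoder and token-level classification heads; the class names, dimensions, and the decay-based auxiliary weight are illustrative stand-ins, since the record does not give the paper's actual architecture or weighting formula.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        # BiLSTM encoder whose parameters are shared by both tasks.
        def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                bidirectional=True)

        def forward(self, tokens):
            # (batch, seq_len) -> (batch, seq_len, 2 * hidden_dim)
            return self.lstm(self.embed(tokens))[0]

    class MultiTaskParser(nn.Module):
        # AMR parsing as the main task, dependency parsing as the
        # auxiliary task, both reading the same shared encoder states.
        def __init__(self, vocab_size, n_amr_labels, n_dep_labels, dim=512):
            super().__init__()
            self.encoder = SharedEncoder(vocab_size)
            self.dep_head = nn.Linear(dim, n_dep_labels)  # auxiliary task
            self.amr_head = nn.Linear(dim, n_amr_labels)  # main task
            # Attention that lets the AMR side selectively read the
            # shared (syntax-bearing) representation.
            self.attn = nn.MultiheadAttention(dim, num_heads=4,
                                              batch_first=True)

        def forward(self, tokens):
            h = self.encoder(tokens)             # shared states
            dep_logits = self.dep_head(h)        # auxiliary prediction
            syn, _ = self.attn(h, h, h)          # syntax-aware summary
            amr_logits = self.amr_head(h + syn)  # main prediction
            return amr_logits, dep_logits

    def dynamic_loss(amr_loss, dep_loss, step, decay_steps=1000):
        # Hypothetical dynamic weighting: the auxiliary weight decays
        # over training so optimization increasingly focuses on the
        # AMR task, as the abstract's motivation suggests.
        aux_weight = max(0.1, 1.0 - step / decay_steps)
        return amr_loss + aux_weight * dep_loss

    if __name__ == "__main__":
        model = MultiTaskParser(vocab_size=1000, n_amr_labels=50,
                                n_dep_labels=40)
        tokens = torch.randint(0, 1000, (2, 7))
        amr_logits, dep_logits = model(tokens)
        print(amr_logits.shape, dep_logits.shape)  # (2, 7, 50) (2, 7, 40)

Sharing the encoder forces the dependency signal into the representations the AMR head consumes, which is the core of the multi-task approach the abstract describes, as opposed to a pipeline that feeds one parser's output into the other.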