Instance-Based Neural Dependency Parsing

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, 2021-01, Vol. 9, p. 1493-1507
Main Authors: Ouchi, Hiroki; Suzuki, Jun; Kobayashi, Sosuke; Yokoi, Sho; Kuribayashi, Tatsuki; Yoshikawa, Masashi; Inui, Kentaro
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achieve competitive accuracy with standard neural models and that their instance-based explanations are reasonably plausible.
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00439
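
The abstract describes an inference process in which each candidate dependency edge is labeled by comparing it to edges in the training set, so that the contributing training edges serve as the rationale for the prediction. The following is a minimal sketch of that idea, assuming the comparison is a similarity-based k-nearest-neighbor lookup over vector representations of edges; the function names, the concatenation-based edge vectors, and the weighted-voting scheme are illustrative assumptions, not the paper's actual model.

    # Minimal sketch (assumed, not the paper's model): label a candidate
    # head-dependent edge by comparing it to stored training-edge vectors.
    import numpy as np

    def edge_vector(head_vec, dep_vec):
        # Toy edge representation: concatenate head and dependent word vectors.
        return np.concatenate([head_vec, dep_vec])

    def label_by_nearest_edges(candidate, train_edges, train_labels, k=5):
        # train_edges: (N, d) array of training-edge vectors.
        # train_labels: list of N dependency relation labels.
        sims = train_edges @ candidate          # similarity to every training edge
        top = np.argsort(-sims)[:k]             # k most similar training edges
        votes = {}
        for i in top:
            votes[train_labels[i]] = votes.get(train_labels[i], 0.0) + float(sims[i])
        label = max(votes, key=votes.get)
        # The supporting edges double as an instance-based explanation.
        rationale = [(int(i), float(sims[i]), train_labels[i]) for i in top]
        return label, rationale

Because the predicted label is assembled from the scores of identifiable training edges, returning the top-k edges alongside the label yields the kind of instance-based explanation the abstract refers to, whereas a standard classifier over edge representations offers no such pointer back to the training set.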