Relation Extraction with Explanation
Format: Article
Language: English
Abstract: ACL 2020. Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy, but little is known about the explainability of these models. In this work, we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of the explanations afforded by relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation quality. We also propose to automatically generate "distractor" sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.
DOI: 10.48550/arxiv.2005.14271
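
The abstract describes two concrete mechanisms: learning per-sentence importance weights within a bag (which double as sentence-level explanations) and augmenting bags with synthetic "distractor" sentences that the model is trained to ignore. Below is a minimal PyTorch sketch of both ideas under stated assumptions: the class BagAttentionExtractor, the function distractor_penalty, and all shapes and hyperparameters are hypothetical illustrations of the general technique, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BagAttentionExtractor(nn.Module):
        """Hypothetical sketch: score each sentence in a bag, pool the bag
        by attention, and classify the relation from the pooled vector."""

        def __init__(self, sent_dim: int, num_relations: int):
            super().__init__()
            self.scorer = nn.Linear(sent_dim, 1)        # importance weight per sentence
            self.classifier = nn.Linear(sent_dim, num_relations)

        def forward(self, sent_encodings: torch.Tensor):
            # sent_encodings: (bag_size, sent_dim); in the paper's setup, entity
            # mentions would already be replaced by fine-grained entity types
            # during preprocessing, before encoding.
            scores = self.scorer(sent_encodings).squeeze(-1)  # (bag_size,)
            weights = F.softmax(scores, dim=0)   # importance weights = explanation
            bag_repr = weights @ sent_encodings  # weighted sum, (sent_dim,)
            return self.classifier(bag_repr), weights

    def distractor_penalty(weights: torch.Tensor,
                           is_distractor: torch.Tensor) -> torch.Tensor:
        """Extra loss term: attention mass assigned to synthetic distractor
        sentences should shrink to zero, training the model to ignore them."""
        return (weights * is_distractor.float()).sum()

    # Toy usage: a bag of 4 sentence encodings, the last one a distractor.
    model = BagAttentionExtractor(sent_dim=64, num_relations=5)
    bag = torch.randn(4, 64)
    is_distractor = torch.tensor([0, 0, 0, 1])
    logits, weights = model(bag)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([2])) \
           + distractor_penalty(weights, is_distractor)
    loss.backward()

In a sketch like this, the softmax weights serve double duty: they pool the bag for classification, and they can be read off directly as the model's sentence-level explanation, which is what the annotated test set described in the abstract would evaluate.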