What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subject terms: | |
Online access: | Order full text |
Abstract: | Humans use language to refer to entities in the external world. Motivated by
this, in recent years several models that incorporate a bias towards learning
entity representations have been proposed. Such entity-centric models have
shown empirical success, but we still know little about why. In this paper we
analyze the behavior of two recently proposed entity-centric models in a
referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4).
We show that these models outperform the state of the art on this task, and
that they do better on lower-frequency entities than a non-entity-centric
counterpart of the same size. We argue that making models
entity-centric naturally fosters good architectural decisions. However, we also
show that these models do not really build entity representations and that they
make poor use of linguistic context. These negative results underscore the need
for model analysis, to test whether the motivations for particular
architectures are borne out in how models behave when deployed. |
DOI: | 10.48550/arxiv.1905.06649 |
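The "bias towards learning entity representations" mentioned in the abstract is typically realized by giving the model a dedicated trainable vector per entity and linking each mention by comparing its encoding against that table. The sketch below illustrates such an entity-library scorer in PyTorch; the class name, dimensions, and entity count are illustrative assumptions, not the exact architecture analyzed in the paper.

```python
# Illustrative sketch only: a generic entity-library scorer, not the authors'
# exact model. All names and sizes below are hypothetical.
import torch
import torch.nn as nn


class EntityLibraryScorer(nn.Module):
    """Scores each mention representation against a learned table of
    entity embeddings (one row per character/entity in the dialogue)."""

    def __init__(self, num_entities: int, hidden_dim: int):
        super().__init__()
        # One trainable vector per entity: the inductive bias toward
        # explicit entity representations.
        self.entity_embeddings = nn.Embedding(num_entities, hidden_dim)

    def forward(self, mention_states: torch.Tensor) -> torch.Tensor:
        # mention_states: (batch, hidden_dim), e.g. encoder states at mention tokens.
        # Dot-product similarity against every entity vector -> logits over entities.
        return mention_states @ self.entity_embeddings.weight.T  # (batch, num_entities)


# Usage: link 4 mention encodings to one of 78 candidate entities
# (the entity count here is a placeholder, not taken from the task definition).
scorer = EntityLibraryScorer(num_entities=78, hidden_dim=300)
mentions = torch.randn(4, 300)
predicted_entity_ids = scorer(mentions).argmax(dim=-1)
print(predicted_entity_ids.shape)  # torch.Size([4])
```

Training such a scorer with cross-entropy over gold entity IDs encourages the embedding table to act as the model's entity memory; the paper's analysis asks whether these vectors actually behave like entity representations in practice.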