Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
Format: Article
Language: English
Abstract: With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of the dialogue history. However, such a mechanism forces the knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses.
DOI: 10.48550/arxiv.2302.05888
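The abstract describes alleviating the order effect by modifying the position embeddings of the knowledge input. The record does not give the exact scheme, but one plausible reading is that every knowledge statement is assigned the same starting position index, so the model receives no positional cue about concatenation order. The sketch below illustrates that idea; the function name and the `knowledge_lens` representation are hypothetical, not taken from the paper.

```python
def build_parallel_position_ids(knowledge_lens, history_len):
    """Build position ids for a sequence formed by concatenating several
    knowledge statements followed by the dialogue history.

    Illustrative assumption (not the paper's exact method): every knowledge
    statement restarts its positions at 0, so all statements occupy the
    same positional range regardless of concatenation order; the dialogue
    history then continues after the longest knowledge span.

    knowledge_lens: token count of each knowledge statement.
    history_len: token count of the dialogue history.
    """
    position_ids = []
    for n in knowledge_lens:
        # each statement gets positions 0..n-1, independent of its order
        position_ids.extend(range(n))
    # history positions follow the longest knowledge statement
    max_k = max(knowledge_lens) if knowledge_lens else 0
    position_ids.extend(range(max_k, max_k + history_len))
    return position_ids


# two knowledge statements of 3 and 2 tokens, then a 4-token history
print(build_parallel_position_ids([3, 2], 4))
# → [0, 1, 2, 0, 1, 3, 4, 5, 6]
```

Under this scheme, swapping the two knowledge statements changes only which tokens carry which ids, not the set of positional patterns the model sees, which matches the abstract's goal of each statement being considered uniformly.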