Tree-Structured Semantic Encoder with Knowledge Sharing for Domain Adaptation in Natural Language Generation
Format: | Article |
Language: | English |
Abstract: | Domain adaptation in natural language generation (NLG) remains challenging
because of the high complexity of input semantics across domains and limited
data of a target domain. This is particularly the case for dialogue systems,
where we want to be able to seamlessly include new domains into the
conversation. Therefore, it is crucial for generation models to share knowledge
across domains for the effective adaptation from one domain to another. In this
study, we exploit a tree-structured semantic encoder to capture the internal
structure of complex semantic representations required for multi-domain
dialogues in order to facilitate knowledge sharing across domains. In addition,
a layer-wise attention mechanism between the tree encoder and the decoder is
adopted to further improve the model's capability. The automatic evaluation
results show that our model outperforms previous methods in terms of the BLEU
score and the slot error rate, in particular when the adaptation data is
limited. In subjective evaluation, human judges tend to prefer the sentences
generated by our model, rating them more highly on informativeness and
naturalness than other systems. |
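The abstract reports results on the slot error rate, a standard NLG metric that penalises slots from the input meaning representation that the generated utterance fails to realise, plus slots it hallucinates. As a minimal sketch of that standard definition (not the paper's exact matching procedure, and with hypothetical slot names):

```python
def slot_error_rate(mr_slots, realised_slots):
    """SER = (missing + redundant) / total required slots.

    `mr_slots`: slots in the input meaning representation.
    `realised_slots`: slots detected in the generated utterance.
    """
    mr = set(mr_slots)
    out = set(realised_slots)
    missing = len(mr - out)    # required slots not realised
    redundant = len(out - mr)  # realised slots that were not required
    if not mr:
        return 0.0
    return (missing + redundant) / len(mr)

# Example: the input requires `name` and `area`; the output drops
# `area` and hallucinates `price`, so SER = (1 + 1) / 2 = 1.0.
print(slot_error_rate({"name", "area"}, {"name", "price"}))  # 1.0
```

Lower is better; a perfect realisation of all and only the required slots gives 0.0.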
DOI: | 10.48550/arxiv.1910.06719 |