Linguistic Knowledge-Aware Neural Machine Translation

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018-12, Vol. 26 (12), pp. 2341-2354
Authors: Li, Qiang; Wong, Derek F.; Chao, Lidia S.; Zhu, Muhua; Xiao, Tong; Zhu, Jingbo; Zhang, Min
Format: Article
Language: English
Description
Abstract: Recently, researchers have shown an increasing interest in incorporating linguistic knowledge into neural machine translation (NMT). To this end, previous works either alter the architecture of the NMT encoder to incorporate syntactic information into the translation model, or generalize the embedding layer of the encoder to encode additional linguistic features. The former approach mainly focuses on injecting the syntactic structure of the source sentence into the encoding process, leading to a complicated model that lacks the flexibility to incorporate other types of knowledge. The latter extends word embeddings by treating additional linguistic knowledge as features that enrich the word representation; it thus does not explicitly balance the contribution of the word embeddings against that of the additional linguistic knowledge. To address these limitations, this paper proposes a knowledge-aware NMT approach that models additional linguistic features in parallel with the word feature. The core idea is to model the series of linguistic features attached to each word (a knowledge block) using a recurrent neural network (RNN); at the sentence level, these word-aligned knowledge blocks are further encoded with an RNN encoder. In decoding, we propose a knowledge gate and an attention gate that dynamically control the proportions of information from the different sources contributing to the generation of each target word. Extensive experiments show that our approach better accounts for the importance of the additional linguistic knowledge, and we observe significant improvements of 1.0 to 2.3 BLEU points on Chinese↔English and English→German translation tasks.
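To make the gating idea in the abstract concrete, below is a minimal numpy sketch of how a decoder step might combine a word-level attention context with a knowledge-block context through two learned sigmoid gates. This is not the authors' implementation: the parameterization (one gate per context, computed from the decoder state concatenated with that context) and all names (s_t, c_t, k_t, W_a, W_k) are illustrative assumptions; the paper's exact equations may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

# Decoder hidden state at step t, plus the two contexts it must balance:
s_t = rng.standard_normal(d)   # decoder state
c_t = rng.standard_normal(d)   # attention context over word encodings
k_t = rng.standard_normal(d)   # attention context over knowledge-block encodings

# Gate parameters; random here, learned jointly with the model in practice.
W_a = rng.standard_normal((d, 2 * d))
W_k = rng.standard_normal((d, 2 * d))

# Attention gate: element-wise weight on the word-level attention context.
g_a = sigmoid(W_a @ np.concatenate([s_t, c_t]))
# Knowledge gate: element-wise weight on the linguistic-knowledge context.
g_k = sigmoid(W_k @ np.concatenate([s_t, k_t]))

# Gated combination that would feed the prediction of the next target word.
context = g_a * c_t + g_k * k_t
print(context.shape)  # (8,)
```

Because both gates are vector-valued, such a mechanism can weight lexical evidence and linguistic-knowledge evidence differently per dimension at every decoding step, which is the "dynamic control of proportions" the abstract describes.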
ISSN: 2329-9290; 2329-9304
DOI: 10.1109/TASLP.2018.2864648