Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
| Main authors: | , , , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
Abstract:

Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021).

Machine translation has seen rapid progress with the advent of Transformer-based models. These models have no explicit linguistic structure built into them, yet they may still implicitly learn structured relationships by attending to relevant tokens. We hypothesize that this structural learning could be made more robust by explicitly endowing Transformers with a structural bias, and we investigate two methods for building in such a bias. One method, the TP-Transformer, augments the traditional Transformer architecture to include an additional component to represent structure. The second method imbues structure at the data level by segmenting the data with morphological tokenization. We test these methods on translating from English into morphologically rich languages, Turkish and Inuktitut, and consider both automatic metrics and human evaluations. We find that each of these two approaches allows the network to achieve better performance, but this improvement is dependent on the size of the dataset. In sum, structural encoding methods make Transformers more sample-efficient, enabling them to perform better from smaller amounts of data.
DOI: 10.48550/arxiv.2208.06061
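To give a concrete sense of what the abstract's data-level bias, "segmenting the data with morphological tokenization", means in practice, the minimal sketch below splits a Turkish word into its morphemes with a toy greedy longest-match segmenter. The morpheme inventory `TOY_MORPHEMES` and the function `segment` are hypothetical illustrations invented for this sketch; the record does not specify which segmentation tool the paper actually used.

```python
# Minimal sketch of morphological segmentation, assuming a known morpheme
# inventory. Illustrative toy only, not the segmenter used in the paper.

# Hypothetical morpheme list for one Turkish example:
# "evlerimizde" = ev (house) + ler (plural) + imiz (our) + de (locative),
# i.e. "in our houses".
TOY_MORPHEMES = {"ev", "ler", "imiz", "de"}


def segment(word, morphemes):
    """Greedy longest-match split of `word` into known morphemes.

    If no known morpheme matches at the current position, the rest of
    the word is emitted as a single unsegmented piece.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest candidate first
            if word[i:j] in morphemes:
                pieces.append(word[i:j])
                i = j
                break
        else:  # no known morpheme starts at position i
            pieces.append(word[i:])
            break
    return pieces


print(segment("evlerimizde", TOY_MORPHEMES))  # ['ev', 'ler', 'imiz', 'de']
```

A split along morpheme boundaries like this contrasts with purely frequency-based subword methods (e.g. BPE), whose segments need not align with morphemes; that alignment is the structural bias the abstract attributes to morphological tokenization.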