Multi-Path Transformer is Better: A Case Study on Neural Machine Translation
Format: Article
Language: English
Abstract: For years, model performance in machine learning has obeyed a power-law relationship with model size. For parameter efficiency, recent studies focus on increasing model depth rather than width to achieve better performance. In this paper, we study how model width affects the Transformer model through a parameter-efficient multi-path structure. To better fuse features extracted from different paths, we add three additional operations to each sublayer: a normalization at the end of each path, a cheap operation to produce more features, and a learnable weighting mechanism to fuse all features flexibly. Extensive experiments on 12 WMT machine translation tasks show that, with the same number of parameters, the shallower multi-path model can achieve performance similar to or even better than that of the deeper model. This reveals that the multi-path structure deserves more attention, and that model depth and width should be balanced to train a better large-scale Transformer.
DOI: 10.48550/arxiv.2305.05948
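
The abstract describes three per-sublayer operations for fusing multi-path features: a normalization at the end of each path, a cheap operation that produces extra features, and a learnable weighted fusion. The sketch below is a minimal PyTorch illustration of that idea under stated assumptions, not the authors' implementation: the choice of paths, the linear layer standing in for the "cheap operation", and the softmax fusion weights are all illustrative.

```python
import torch
import torch.nn as nn


class MultiPathSublayer(nn.Module):
    """Illustrative multi-path Transformer sublayer.

    Each path is an independent sub-module (e.g. an attention or FFN block).
    Following the abstract: every path output is normalized, a cheap
    operation produces extra features, and learnable weights fuse all
    features before the residual connection. Details are assumptions.
    """

    def __init__(self, d_model: int, paths: nn.ModuleList):
        super().__init__()
        self.paths = paths
        # normalization at the end of each path
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in paths)
        # a cheap operation (here: one shared linear map) producing an
        # extra feature set from the averaged path outputs
        self.cheap = nn.Linear(d_model, d_model)
        # learnable fusion weights over (num_paths + 1 cheap feature)
        self.fuse = nn.Parameter(torch.zeros(len(paths) + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [norm(path(x)) for path, norm in zip(self.paths, self.norms)]
        feats.append(self.cheap(torch.stack(feats).mean(dim=0)))
        w = torch.softmax(self.fuse, dim=0)            # flexible weighting
        fused = sum(wi * f for wi, f in zip(w, feats))
        return x + fused                               # residual connection


if __name__ == "__main__":
    # Example: two parallel feed-forward paths in place of one wide FFN.
    d = 512
    paths = nn.ModuleList(
        nn.Sequential(nn.Linear(d, 1024), nn.ReLU(), nn.Linear(1024, d))
        for _ in range(2)
    )
    layer = MultiPathSublayer(d, paths)
    out = layer(torch.randn(8, 16, d))  # (batch, seq_len, d_model)
    print(out.shape)
```

In this reading, widening the model means adding parallel paths per sublayer rather than enlarging a single path, so the fusion weights let the network learn how much each path (and the cheap extra feature) contributes.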