On Leveraging the Visual Modality for Neural Machine Translation
Format: Article
Language: English
Abstract: Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posited that the observed gains are limited mainly because the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time) render the source text sufficient for context. In this work, we further investigate this hypothesis on a new large-scale multimodal Machine Translation (MMT) dataset, How2, which has a mean sentence length 1.57 times that of Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each designed to ensure that visual context is utilized at a different stage of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNeXt for How2) do not lend themselves to increasing the discriminativeness among vocabulary elements at token-level prediction in NMT. We demonstrate this qualitatively by analyzing attention distributions and quantitatively through Principal Component Analysis, concluding that it is the quality of the visual embeddings, rather than the length of the sentences, that needs to be improved in existing MMT datasets.
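The abstract names three fusion techniques but does not describe them here. As a rough illustration only, the following is a minimal PyTorch sketch of one generic way a pooled visual feature (e.g. a 2048-d ResNet/ResNeXt vector) could be injected into a Sequence-to-Sequence decoder before token-level prediction; the module name, dimensions, and fusion scheme are all assumptions, not the paper's architecture.

```python
# Hypothetical decoder-side fusion sketch, NOT the paper's actual method.
# A pooled visual feature is projected into the decoder's hidden space and
# concatenated with the decoder state before the vocabulary projection.
import torch
import torch.nn as nn

class VisualFusionDecoderStep(nn.Module):
    def __init__(self, hidden_dim=512, visual_dim=2048, vocab_size=32000):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)  # project visual features
        self.fusion = nn.Linear(2 * hidden_dim, hidden_dim)   # fuse text + visual context
        self.out = nn.Linear(hidden_dim, vocab_size)          # token-level prediction

    def forward(self, decoder_state, visual_feats):
        # decoder_state: (batch, hidden_dim); visual_feats: (batch, visual_dim)
        v = torch.tanh(self.visual_proj(visual_feats))
        fused = torch.tanh(self.fusion(torch.cat([decoder_state, v], dim=-1)))
        return self.out(fused)  # logits over the target vocabulary
```

Analogous fusion could in principle be applied at other stages of the pipeline (encoder input, attention, or output layer), which is the axis the abstract says its three techniques vary along.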
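The quantitative claim rests on a Principal Component Analysis of the extracted visual embeddings: if a handful of components explain nearly all of the variance, the embeddings carry little signal for discriminating among examples, and hence among vocabulary elements at prediction time. Below is a minimal sketch of that kind of diagnostic, assuming pooled feature vectors; `explained_variance_profile` and the random stand-in data are illustrative only.

```python
# Sketch of a PCA-based diagnostic on visual embeddings (illustrative only).
# `embeddings` stands in for a (num_examples, feature_dim) matrix of pooled
# ResNet/ResNeXt features extracted from the dataset.
import numpy as np
from sklearn.decomposition import PCA

def explained_variance_profile(embeddings: np.ndarray, n_components: int = 50):
    pca = PCA(n_components=n_components)
    pca.fit(embeddings)
    # cumulative[k] = fraction of variance captured by the first k+1 components
    return np.cumsum(pca.explained_variance_ratio_)

# Random features as a hypothetical stand-in for real extracted embeddings.
emb = np.random.randn(1000, 2048).astype(np.float32)
profile = explained_variance_profile(emb)
print(f"Variance explained by top 10 components: {profile[9]:.2%}")
```

A sharply concentrated profile on real features, compared against the flat profile of random data, would support the abstract's conclusion that embedding quality, not sentence length, is the bottleneck.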
DOI: 10.48550/arxiv.1910.02754