TMT: A Transformer-based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-aware Dialog
Format: Article
Language: English
Abstract: Audio Visual Scene-aware Dialog (AVSD) is the task of generating responses in a discussion about a given video. The previous state-of-the-art model shows superior performance on this task using a Transformer-based architecture. However, there remain some limitations in learning better representations of the modalities. Inspired by Neural Machine Translation (NMT), we propose the Transformer-based Modal Translator (TMT), which learns the representation of a source modal sequence by translating it into a related target modal sequence in a supervised manner. Building on Multimodal Transformer Networks (MTN), we apply TMT to video and dialog, proposing MTN-TMT for the video-grounded dialog system. On the AVSD track of the Dialog System Technology Challenge 7, MTN-TMT outperforms MTN and other submitted models in both the Video and Text task and the Text Only task. Compared with MTN, MTN-TMT improves all metrics, achieving a relative improvement of up to 14.1% on CIDEr.

Index Terms: multimodal learning, audio-visual scene-aware dialog, neural machine translation, multi-task learning
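
To make the core idea concrete, below is a minimal sketch (not the authors' released code) of a Transformer-based modal translator as described in the abstract: the representation of a source modal sequence (e.g. video features) is learned by translating it into a related target modal sequence (e.g. text embeddings) under supervision, with the translation loss added to the main dialog loss in a multi-task setup. All dimensions, the layer sizes, and the MSE translation loss here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModalTranslator(nn.Module):
    """Sketch: translate a source modal sequence into a target modal
    sequence with a Transformer, as an auxiliary supervised objective."""

    def __init__(self, src_dim: int, tgt_dim: int, d_model: int = 256):
        super().__init__()
        self.src_proj = nn.Linear(src_dim, d_model)  # source features -> model width
        self.tgt_proj = nn.Linear(tgt_dim, d_model)  # target features -> model width
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, tgt_dim)       # predict target modal features

    def forward(self, src_seq: torch.Tensor, tgt_seq: torch.Tensor) -> torch.Tensor:
        # Encode the source modality; decode conditioned on the (teacher-forced)
        # target modality; return predicted target features.
        decoded = self.transformer(self.src_proj(src_seq), self.tgt_proj(tgt_seq))
        return self.out(decoded)

# Toy usage: an 8-step video feature sequence (512-d) translated to a
# 6-step text embedding sequence (300-d). In a multi-task setup, this
# translation loss would be added to the response-generation loss.
src = torch.randn(2, 8, 512)
tgt = torch.randn(2, 6, 300)
model = ModalTranslator(src_dim=512, tgt_dim=300)
pred = model(src, tgt)
translation_loss = nn.functional.mse_loss(pred, tgt)
```

The supervised translation objective is what distinguishes this from a plain cross-modal attention layer: the encoder is pushed to produce source representations that carry enough information to reconstruct the paired target modality.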
DOI: 10.48550/arXiv.2010.10839