M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Summary: | We present M3P, a Multitask Multilingual Multimodal Pre-trained model that combines multilingual pre-training and multimodal pre-training into a unified framework via multitask pre-training. Our goal is to learn universal representations that can map objects occurring in different modalities, or texts expressed in different languages, into a common semantic space. In addition, to explicitly encourage fine-grained alignment between images and non-English languages, we also propose Multimodal Code-switched Training (MCT), which combines monolingual pre-training and multimodal pre-training via a code-switch strategy. Experiments are performed on the multilingual image retrieval task across two benchmark datasets, MSCOCO and Multi30K. M3P achieves results comparable to the state of the art for English and new state-of-the-art results for non-English languages. |
| DOI: | 10.48550/arxiv.2006.02635 |
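The summary names a code-switch strategy (MCT) but the record carries no implementation detail. The sketch below is a minimal, hypothetical illustration of such a strategy: randomly replacing English caption words with dictionary translations so the same image can be paired with a mixed-language caption during pre-training. The `TOY_DICT` lexicon, the `code_switch` function, and the `p_replace` parameter are assumptions for illustration, not the paper's actual procedure.

```python
import random

# Hypothetical bilingual lexicon; a real setup would use full translation
# dictionaries, but these two entries are made up for illustration.
TOY_DICT = {
    "dog": {"de": "Hund", "fr": "chien"},
    "park": {"de": "Park", "fr": "parc"},
}

def code_switch(tokens, lexicon, p_replace=0.3, rng=None):
    """Return a code-switched copy of `tokens`: each word found in the
    lexicon is replaced, with probability `p_replace`, by one of its
    translations in a randomly chosen target language."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        translations = lexicon.get(tok.lower())
        if translations and rng.random() < p_replace:
            lang = rng.choice(sorted(translations))
            out.append(translations[lang])
        else:
            out.append(tok)
    return out

# Example: an English caption becomes a mixed-language caption that can
# still be paired with the original image as a training example.
caption = "a dog plays in the park".split()
print(code_switch(caption, TOY_DICT, p_replace=1.0))
# -> e.g. ['a', 'chien', 'plays', 'in', 'the', 'parc']
```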