XGPT: Cross-modal Generative Pre-Training for Image Captioning
Format: Article
Language: English
Abstract: While many BERT-based cross-modal pre-trained models produce excellent results on downstream understanding tasks like image-text retrieval and VQA, they cannot be applied to generation tasks directly. In this paper, we propose XGPT, a new method of Cross-modal Generative Pre-Training for Image Captioning that is designed to pre-train text-to-image caption generators through three novel generation tasks: Image-conditioned Masked Language Modeling (IMLM), Image-conditioned Denoising Autoencoding (IDA), and Text-conditioned Image Feature Generation (TIFG). As a result, the pre-trained XGPT can be fine-tuned without any task-specific architecture modifications to create state-of-the-art models for image captioning. Experiments show that XGPT obtains new state-of-the-art results on the benchmark datasets, including COCO Captions and Flickr30k Captions. We also use XGPT to generate new image captions as data augmentation for the image retrieval task and achieve significant improvement on all recall metrics.
DOI: 10.48550/arxiv.2003.01473
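To make the three pre-training objectives named in the abstract concrete, the sketch below shows how IMLM, IDA, and TIFG losses could be combined for a cross-modal model. It is a minimal illustration, not the paper's implementation: the toy model, its sizes, and the specific masking, noising, and feature-regression choices are assumptions for demonstration purposes only.

```python
# Minimal PyTorch sketch of XGPT-style multi-task pre-training (assumed setup, not the
# authors' code). IMLM and IDA predict caption tokens conditioned on image regions;
# TIFG regresses image region features conditioned on the caption text.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D, REGION_DIM = 1000, 256, 2048  # assumed vocabulary, model, and region-feature sizes

class TinyCrossModalModel(nn.Module):
    """Toy stand-in for a shared cross-modal encoder over image regions and caption tokens."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, D)
        self.img_proj = nn.Linear(REGION_DIM, D)            # project region features to model dim
        enc_layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.lm_head = nn.Linear(D, VOCAB)                   # predicts caption tokens
        self.feat_head = nn.Linear(D, REGION_DIM)            # regresses image region features

    def forward(self, tokens, regions):
        # One joint sequence: image-region embeddings followed by token embeddings.
        x = torch.cat([self.img_proj(regions), self.tok_emb(tokens)], dim=1)
        h = self.encoder(x)
        n = regions.size(1)
        return h[:, :n], h[:, n:]                            # (image states, text states)

def pretraining_losses(model, tokens, masked_tokens, noised_tokens, regions):
    # IMLM: recover the original tokens from a masked caption, conditioned on the image.
    _, h_txt = model(masked_tokens, regions)
    imlm = F.cross_entropy(model.lm_head(h_txt).flatten(0, 1), tokens.flatten())

    # IDA: reconstruct the clean caption from a corrupted caption, conditioned on the image.
    _, h_txt = model(noised_tokens, regions)
    ida = F.cross_entropy(model.lm_head(h_txt).flatten(0, 1), tokens.flatten())

    # TIFG: regress the original region features from text, with the real features hidden.
    h_img, _ = model(tokens, torch.zeros_like(regions))
    tifg = F.mse_loss(model.feat_head(h_img), regions)

    return imlm + ida + tifg

# Toy usage: 2 captions of 8 tokens, 4 image regions each (masking/noising omitted here).
tok = torch.randint(0, VOCAB, (2, 8))
reg = torch.randn(2, 4, REGION_DIM)
model = TinyCrossModalModel()
loss = pretraining_losses(model, tok, tok.clone(), tok.clone(), reg)
loss.backward()
```

After pre-training on such a combined loss, the same model would be fine-tuned directly on caption generation, which is the abstract's point about requiring no task-specific architecture changes.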