VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation
Format: Article
Language: English
Abstract: We propose a new two-stage pre-training framework for video-to-text generation tasks such as video captioning and video question answering: a generative encoder-decoder model is first jointly pre-trained on massive image-text data to learn fundamental vision-language concepts, and then adapted to video data in an intermediate video-text pre-training stage to learn video-specific skills such as spatio-temporal reasoning. As a result, our VideoOFA model achieves new state-of-the-art performance on four video captioning benchmarks, beating prior art by an average of 9.7 points in CIDEr score. It also outperforms existing models on two open-ended video question answering datasets, showcasing its generalization capability as a universal video-to-text model.
DOI: 10.48550/arxiv.2305.03204
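The record contains no code, so the following is only a minimal, hypothetical sketch of the two-stage schedule described in the abstract, assuming a PyTorch-style setup: stage 1 pre-trains a generative encoder-decoder on image-text pairs, and stage 2 continues pre-training the same weights on video-text pairs. All class names, dimensions, and the toy data are placeholders, not the authors' implementation.

```python
import torch
from torch import nn


class GenerativeEncoderDecoder(nn.Module):
    """Placeholder stand-in for an OFA-style generative encoder-decoder."""

    def __init__(self, vocab_size: int = 1000, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.visual_proj = nn.Linear(dim, dim)  # maps visual features into the decoder space
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, visual_feats, text_tokens):
        memory = self.visual_proj(visual_feats)   # (B, N_visual, dim)
        tgt = self.embed(text_tokens)             # (B, T, dim)
        return self.lm_head(self.decoder(tgt, memory))


def generative_loss(model, visual_feats, text_tokens):
    """Teacher-forced next-token cross-entropy (captioning / open-ended QA answers)."""
    logits = model(visual_feats, text_tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), text_tokens[:, 1:].reshape(-1)
    )


def toy_batches(num_batches, frames_per_clip, dim=256, vocab=1000):
    """Random stand-in data: frames_per_clip * 16 visual tokens per sample."""
    for _ in range(num_batches):
        visual = torch.randn(2, frames_per_clip * 16, dim)
        text = torch.randint(0, vocab, (2, 12))
        yield visual, text


def pretrain(model, batches):
    """One pre-training stage; video frames are concatenated along the token axis."""
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for visual, text in batches:
        loss = generative_loss(model, visual, text)
        opt.zero_grad()
        loss.backward()
        opt.step()


model = GenerativeEncoderDecoder()
pretrain(model, toy_batches(3, frames_per_clip=1))  # stage 1: image-text (single frame)
pretrain(model, toy_batches(3, frames_per_clip=4))  # stage 2: video-text (multi-frame)
```

In this sketch, the only difference between the two stages is the number of visual tokens per sample; the actual intermediate video-text stage in the paper is where video-specific skills such as spatio-temporal reasoning are learned before fine-tuning on captioning or question answering.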