Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks
Format: Article
Language: English
Abstract: Foundational multimodal models pre-trained on large-scale image-text pairs, video-text pairs, or both have shown strong generalization abilities on downstream tasks. However, unlike image-text models, pretraining video-text models is not always feasible due to the difficulty of collecting large-scale clean and aligned data and the exponential computational costs involved in the pretraining phase. Therefore, the pertinent question to ask is: can image-text models be adapted to video tasks, and is there any benefit to using these models over pretraining directly on videos? In this work, we focus on this question by proposing a detailed study of the generalization abilities of image-text models when evaluated on video understanding tasks in a zero-shot setting. We investigate 9 foundational image-text models on a diverse set of video tasks that includes video action recognition (video AR), video retrieval (video RT), video question answering (video QA), video multiple choice (video MC), and video captioning (video CP). Our experiments show that image-text models exhibit impressive performance on video AR, video RT, and video MC. Furthermore, they perform moderately on video captioning and poorly on video QA. These findings shed light on the benefits of adapting foundational image-text models to an array of video tasks while avoiding the costly pretraining step.
DOI: 10.48550/arxiv.2310.04914
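
As a concrete illustration of how an image-text model can be adapted to a video task without any video pretraining, the sketch below performs zero-shot video action recognition: sampled frames are embedded independently with CLIP, mean-pooled into a single video embedding, and ranked against text prompts for the candidate actions. This is a minimal sketch and not the paper's exact evaluation protocol; the checkpoint name, prompt template, frame sampling, and mean-pooling strategy are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): zero-shot video action recognition
# with an image-text model, using the Hugging Face CLIP implementation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_action_recognition(frame_paths, class_names):
    """Classify a video (given as a list of sampled frame image paths)
    against a set of action class names, with no video-specific training."""
    frames = [Image.open(p).convert("RGB") for p in frame_paths]
    # Hypothetical prompt template; the choice of template affects accuracy.
    prompts = [f"a video of a person {c}" for c in class_names]

    with torch.no_grad():
        image_inputs = processor(images=frames, return_tensors="pt")
        frame_emb = model.get_image_features(**image_inputs)           # (T, D)
        text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
        text_emb = model.get_text_features(**text_inputs)              # (C, D)

    # L2-normalize, mean-pool frame embeddings into one video embedding,
    # then rank classes by cosine similarity to the text prompts.
    frame_emb = frame_emb / frame_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    video_emb = frame_emb.mean(dim=0, keepdim=True)
    video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
    scores = (video_emb @ text_emb.T).squeeze(0)                       # (C,)
    return class_names[scores.argmax().item()]
```

The same frame-pooling idea carries over to the retrieval and multiple-choice settings (score a video embedding against caption or answer-option embeddings), whereas generative tasks such as video captioning and video QA require a text decoder and are harder to handle purely with a contrastive image-text model.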