Open Vocabulary Multi-Label Video Classification
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Pre-trained vision-language models (VLMs) have enabled significant progress in open vocabulary computer vision tasks such as image classification, object detection, and image segmentation. Some recent works have focused on extending VLMs to open vocabulary single-label action classification in videos. However, previous methods fall short of holistic video understanding, which requires the ability to simultaneously recognize multiple actions and entities (e.g., objects) in the video in an open vocabulary setting. We formulate this problem as open vocabulary multi-label video classification and propose a method to adapt a pre-trained VLM such as CLIP to solve this task. We leverage large language models (LLMs) to provide the VLM with semantic guidance about class labels, improving its open vocabulary performance through two key contributions. First, we propose an end-to-end trainable architecture that learns to prompt an LLM to generate soft attributes for the CLIP text encoder, enabling it to recognize novel classes. Second, we integrate a temporal modeling module into CLIP's vision encoder to effectively model the spatio-temporal dynamics of video concepts, and we propose a novel regularized finetuning technique to ensure strong open vocabulary classification performance in the video domain. Extensive experiments demonstrate the efficacy of our approach on multiple benchmark datasets.
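To make the task formulation concrete, below is a minimal sketch of open vocabulary multi-label scoring for a video with a frozen CLIP backbone, using the Hugging Face transformers CLIP API. This is not the authors' implementation: the prompt template, the naive mean-pooling over frames, the checkpoint choice, and the sigmoid temperature are all illustrative assumptions standing in for the paper's LLM-generated soft attributes, learned temporal module, and regularized finetuning.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Frozen CLIP backbone (illustrative checkpoint choice, not from the paper).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels are free-form text: the "open vocabulary" part.
labels = ["riding a bike", "dog", "city street", "playing guitar"]

# Stand-in video: 8 blank frames. Real use would sample frames from a clip.
frames = [Image.new("RGB", (224, 224)) for _ in range(8)]

with torch.no_grad():
    # Text side: one embedding per label. The paper prompts an LLM for
    # "soft attributes" per class; a fixed template is a crude stand-in.
    text_inputs = processor(
        text=[f"a video of {label}" for label in labels],
        return_tensors="pt", padding=True,
    )
    text_emb = model.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

    # Vision side: encode frames independently, then mean-pool over time.
    # The paper replaces this naive pooling with a learned temporal module.
    image_inputs = processor(images=frames, return_tensors="pt")
    frame_emb = model.get_image_features(**image_inputs)   # (T, D)
    video_emb = frame_emb.mean(dim=0, keepdim=True)        # (1, D)
    video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)

    # Multi-label scoring: an independent sigmoid per label, rather than
    # a softmax across labels, so several labels can be active at once.
    # The temperature is an arbitrary placeholder; in practice it would
    # be calibrated or learned during finetuning.
    temperature = 10.0
    logits = temperature * video_emb @ text_emb.T          # (1, num_labels)
    scores = logits.sigmoid().squeeze(0)

for label, score in zip(labels, scores.tolist()):
    print(f"{label}: {score:.3f}")
```

The per-label sigmoid is the essential difference from single-label zero-shot classification, where CLIP similarities are typically softmax-normalized across classes and exactly one class wins.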
DOI: 10.48550/arxiv.2407.09073