Verbs in Action: Improving verb understanding in video-language models
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
Summary: | Understanding verbs is crucial to modelling how people and objects interact
with each other and the environment through space and time. Recently,
state-of-the-art video-language models based on CLIP have been shown to have
limited verb understanding and to rely extensively on nouns, restricting their
performance in real-world video applications that require action and temporal
understanding. In this work, we improve verb understanding for CLIP-based
video-language models by proposing a new Verb-Focused Contrastive (VFC)
framework. This consists of two main components: (1) leveraging pretrained
large language models (LLMs) to create hard negatives for cross-modal
contrastive learning, together with a calibration strategy to balance the
occurrence of concepts in positive and negative pairs; and (2) enforcing a
fine-grained, verb phrase alignment loss. Our method achieves state-of-the-art
results for zero-shot performance on three downstream tasks that focus on verb
understanding: video-text matching, video question-answering and video
classification. To the best of our knowledge, this is the first work which
proposes a method to alleviate the verb understanding problem, and does not
simply highlight it. |
---|---|
DOI: | 10.48550/arxiv.2304.06708 |
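
As a rough illustration of the cross-modal contrastive objective described in the summary, the sketch below adds one LLM-generated, verb-swapped hard-negative caption per video to a standard InfoNCE loss. This is a minimal, assumption-laden sketch, not the authors' implementation: the function name `vfc_contrastive_loss`, the batch layout, and the single-hard-negative setup are illustrative choices, and the paper's calibration strategy and fine-grained verb phrase alignment loss are not shown.

```python
import torch
import torch.nn.functional as F

def vfc_contrastive_loss(video_emb, pos_text_emb, neg_text_emb,
                         temperature=0.07):
    """InfoNCE-style loss where each video is contrasted against its
    true caption, the other captions in the batch, and one hard-negative
    caption (same nouns, different verb) produced by an LLM.

    video_emb:    (B, D) L2-normalised video embeddings
    pos_text_emb: (B, D) embeddings of the true captions
    neg_text_emb: (B, D) embeddings of the verb-swapped hard negatives
    """
    B = video_emb.size(0)
    # Similarity of every video to every positive caption in the batch: (B, B).
    sim_pos = video_emb @ pos_text_emb.t() / temperature
    # Similarity of each video to its own hard negative only: (B, 1).
    sim_neg = (video_emb * neg_text_emb).sum(dim=-1, keepdim=True) / temperature
    # Logits over in-batch negatives plus the verb-swapped hard negative.
    logits = torch.cat([sim_pos, sim_neg], dim=1)        # (B, B+1)
    # The diagonal entry (each video's own caption) is the positive class.
    targets = torch.arange(B, device=video_emb.device)
    return F.cross_entropy(logits, targets)

# Usage with random placeholder embeddings:
v = F.normalize(torch.randn(4, 512), dim=-1)
t_pos = F.normalize(torch.randn(4, 512), dim=-1)
t_neg = F.normalize(torch.randn(4, 512), dim=-1)
print(vfc_contrastive_loss(v, t_pos, t_neg))
```

The design point this sketch captures is that a verb-swapped caption is a much harder negative than a random in-batch caption, so it forces the text and video encoders to separate examples by the action rather than by the nouns alone.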