What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions
Format: Article
Language: English
Abstract: Spatio-temporal grounding is the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding-box supervision. This work addresses the task from a multimodal-supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without any human annotation. To this end, we combine local representation learning, which focuses on leveraging fine-grained spatial information, with a global representation encoding that captures higher-level representations, and incorporate both in a joint approach. To evaluate this challenging task in a real-life setting, we propose a new benchmark dataset providing dense spatio-temporal grounding annotations for over 5K events in long, untrimmed, multi-action instructional videos. We evaluate the proposed approach and other methods on this benchmark and on standard downstream tasks, showing that our method improves over current baselines in various settings, including spatial, temporal, and untrimmed multi-action spatio-temporal grounding.
DOI: 10.48550/arxiv.2303.16990
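
To make the joint local-global idea described in the abstract more concrete, here is a minimal, hypothetical sketch of how such an objective could be wired up. Everything in it is an assumption for illustration, not the paper's actual implementation: the function name `joint_grounding_loss`, the attention-pooled local branch, the equal weighting of the two branches, and the symmetric InfoNCE formulation are all placeholders.

```python
# Hypothetical sketch (not the paper's code): a joint local-global
# video-text grounding objective in the spirit of the abstract.
# Local branch: per-clip patch features are attention-pooled with the
# sentence embedding as query (fine-grained spatial cues).
# Global branch: a single clip embedding is compared to the same sentence
# (higher-level context). Both similarities feed one contrastive loss.
import torch
import torch.nn.functional as F


def joint_grounding_loss(patch_feats, clip_feats, text_feats, temperature=0.07):
    """patch_feats: (B, P, D) L2-normalized patch tokens per clip
    clip_feats:  (B, D)    L2-normalized global clip embeddings
    text_feats:  (B, D)    L2-normalized sentence (subtitle) embeddings
    Matched (video, subtitle) pairs are assumed to sit on the diagonal."""
    B = text_feats.size(0)

    # Local similarity: attention-pool the patches of each clip with every
    # sentence as query, then score the pooled feature against that sentence.
    attn = torch.einsum("bpd,cd->bcp", patch_feats, text_feats)   # (B, C, P)
    attn = attn.softmax(dim=-1)
    pooled = torch.einsum("bcp,bpd->bcd", attn, patch_feats)      # (B, C, D)
    local_sim = torch.einsum("bcd,cd->bc", pooled, text_feats)    # (B, C)

    # Global similarity: clip embedding vs. every sentence embedding.
    global_sim = clip_feats @ text_feats.t()                      # (B, C)

    # Joint objective: symmetric InfoNCE on the fused similarity matrix.
    sim = (local_sim + global_sim) / (2 * temperature)
    targets = torch.arange(B, device=sim.device)
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))


if __name__ == "__main__":
    # Toy usage with random features standing in for video/text encoders.
    B, P, D = 4, 16, 256
    patches = F.normalize(torch.randn(B, P, D), dim=-1)
    clips = F.normalize(torch.randn(B, D), dim=-1)
    texts = F.normalize(torch.randn(B, D), dim=-1)
    print(joint_grounding_loss(patches, clips, texts).item())
```

The point of the sketch is only the structure: one term rewards fine-grained spatial alignment between subtitle and patches, the other rewards clip-level agreement, and a single contrastive loss ties them together so that training needs nothing beyond loosely aligned video-subtitle pairs.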