ActionVOS: Actions as Prompts for Video Object Segmentation
Main authors: |  |
Format: | Article |
Language: | English |
Subjects: |  |
Online access: | Order full text |
Abstract: | Delving into the realm of egocentric vision, the advancement of referring video object segmentation (RVOS) stands as pivotal in understanding human activities. However, the existing RVOS task primarily relies on static attributes such as object names to segment target objects, posing challenges in distinguishing target objects from background objects and in identifying objects undergoing state changes. To address these problems, this work proposes a novel action-aware RVOS setting called ActionVOS, aiming at segmenting only active objects in egocentric videos using human actions as a key language prompt. This is because human actions precisely describe the behavior of humans, thereby helping to identify the objects truly involved in the interaction and to understand possible state changes. We also build a method tailored to work under this specific setting. Specifically, we develop an action-aware labeling module with an efficient action-guided focal loss. These designs enable the ActionVOS model to prioritize active objects using existing, readily available annotations. Experimental results on the VISOR dataset reveal that ActionVOS significantly reduces the mis-segmentation of inactive objects, confirming that actions help the ActionVOS model understand objects' involvement. Further evaluations on the VOST and VSCOS datasets show that the novel ActionVOS setting enhances segmentation performance in challenging circumstances involving object state changes. We will make our implementation available at https://github.com/ut-vision/ActionVOS. |
DOI: | 10.48550/arxiv.2407.07402 |
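The abstract mentions an action-guided focal loss that makes the model prioritize objects actually involved in the described action. The following is a minimal illustrative sketch of such a loss in PyTorch, assuming binary per-object mask logits and a boolean flag marking action-involved ("active") objects; the function name, weighting scheme, and constants are hypothetical and are not taken from the authors' implementation (see the linked repository for the official code).

```python
# Minimal sketch (not the official ActionVOS code): a binary focal loss whose
# per-object contribution is up-weighted when the object is marked as "active"
# by an action-aware labeling step. All names and constants are hypothetical.
import torch
import torch.nn.functional as F

GAMMA = 2.0          # standard focal-loss focusing parameter
ACTIVE_WEIGHT = 2.0  # assumed extra weight for action-involved objects


def action_guided_focal_loss(logits: torch.Tensor,
                             targets: torch.Tensor,
                             active: torch.Tensor) -> torch.Tensor:
    """logits, targets: (N, H, W) per-object mask logits and binary ground truth.
    active: (N,) boolean tensor, True for objects the action actually involves."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    prob = torch.sigmoid(logits)
    p_t = prob * targets + (1.0 - prob) * (1.0 - targets)
    focal = ce * (1.0 - p_t) ** GAMMA

    # Up-weight pixels of objects involved in the described action so the model
    # concentrates on active objects rather than background objects.
    weights = torch.ones(active.shape[0], dtype=logits.dtype, device=logits.device)
    weights[active] = ACTIVE_WEIGHT
    return (focal * weights.view(-1, 1, 1)).mean()
```

In this sketch, the focusing term (1 - p_t)^GAMMA down-weights easy pixels as in a standard focal loss, while the per-object weight keeps the loss concentrated on the objects the action prompt refers to.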