LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation
Format: Article
Language: English
Abstract: Referring video object segmentation (RVOS) aims to segment the target instance referred to by a given text expression in a video clip. The text expression normally contains a sophisticated description of the instance's appearance, action, and relation to others. It is therefore rather difficult for an RVOS model to capture all these attributes correspondingly in the video; in fact, the model often favours the action- and relation-related visual attributes of the instance. This can end up with partial or even incorrect mask predictions of the target instance. We tackle this problem by taking a subject-centric short text expression from the original long text expression. The short one retains only the appearance-related information of the target instance, so that we can use it to focus the model's attention on the instance's appearance. We let the model make joint predictions using both the long and short text expressions; we insert a long-short cross-attention module to let the joint features interact, and a long-short predictions intersection loss to regularize the joint predictions. Besides the improvement on the linguistic part, we also introduce a forward-backward visual consistency loss, which utilizes optical flow to warp visual features between the annotated frames and their temporal neighbors for consistency. We build our method on top of two state-of-the-art pipelines. Extensive experiments on A2D-Sentences, Refer-YouTube-VOS, JHMDB-Sentences and Refer-DAVIS17 show impressive improvements of our method. Code is available at https://github.com/LinfengYuan1997/Losh.
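On the linguistic side, the abstract names two ingredients: a long-short cross-attention module and a long-short predictions intersection loss. The PyTorch sketch below shows one plausible way such components could look; the class and function names, the hyper-parameters, and the Dice-style form of the intersection loss are illustrative assumptions, since the abstract does not spell out the exact formulations.

```python
import torch
import torch.nn as nn


class LongShortCrossAttention(nn.Module):
    """Bidirectional cross-attention between long- and short-expression features.

    `dim` and `num_heads` are illustrative hyper-parameters, not values from the paper.
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.long_to_short = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.short_to_long = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_long: torch.Tensor, feat_short: torch.Tensor):
        # feat_long / feat_short: (batch, tokens, dim) features conditioned on the
        # long and short text expressions, respectively.
        long_out, _ = self.long_to_short(feat_long, feat_short, feat_short)
        short_out, _ = self.short_to_long(feat_short, feat_long, feat_long)
        return feat_long + long_out, feat_short + short_out


def intersection_loss(mask_long: torch.Tensor, mask_short: torch.Tensor,
                      gt_mask: torch.Tensor) -> torch.Tensor:
    """One plausible intersection loss (assumed form): the soft intersection of the
    long- and short-expression mask predictions should still cover the ground-truth
    mask, penalised with a Dice-style term.
    """
    p_long = mask_long.sigmoid()
    p_short = mask_short.sigmoid()
    inter = p_long * p_short                      # soft intersection of the two predictions
    num = 2.0 * (inter * gt_mask).sum(dim=(-2, -1))
    den = inter.sum(dim=(-2, -1)) + gt_mask.sum(dim=(-2, -1)) + 1e-6
    return (1.0 - num / den).mean()
```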
DOI: 10.48550/arxiv.2306.08736
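On the visual side, the forward-backward visual consistency loss warps features between annotated frames and their temporal neighbors using optical flow. Below is a minimal sketch of one plausible implementation based on bilinear warping with `grid_sample`; the flow convention (flow_a2n maps annotated-frame pixels into the neighbor frame), the L1 penalty, and all function names are assumptions rather than the paper's exact definition.

```python
import torch
import torch.nn.functional as F


def warp_features(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a (B, C, H, W) feature map with a dense flow field (B, 2, H, W) in pixels;
    flow channel 0 is the x displacement, channel 1 the y displacement (assumed)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow   # (B, 2, H, W)
    # Normalise sampling coordinates to [-1, 1] for grid_sample.
    grid[:, 0] = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, grid.permute(0, 2, 3, 1), align_corners=True)


def forward_backward_consistency_loss(feat_anno, feat_nbr, flow_a2n, flow_n2a):
    """Warp each frame's features into the other's coordinates and penalise the
    difference in both temporal directions (L1 chosen here for illustration)."""
    loss_fwd = F.l1_loss(warp_features(feat_nbr, flow_a2n), feat_anno)
    loss_bwd = F.l1_loss(warp_features(feat_anno, flow_n2a), feat_nbr)
    return loss_fwd + loss_bwd
```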