STFormer: Spatial-Temporal-Aware Transformer for Video Instance Segmentation
| Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2024-10, Vol. PP, pp. 1-15 |
|---|---|
| Main authors: | , , , , , , , |
| Format: | Article |
| Language: | English |
| Online access: | Order full text |
| Abstract: | Video instance segmentation (VIS) is a challenging task that requires jointly handling object classification, segmentation, and tracking in videos. Existing Transformer-based VIS approaches have shown remarkable success, combining encoded features and instance queries as decoder inputs. However, their decoder inputs are kept low-resolution to limit computational cost, resulting in a loss of fine-grained information, sensitivity to background interference, and poor handling of small objects. Moreover, the queries are randomly initialized without location information, hindering convergence and accurate localization of object instances. To address these issues, we propose a novel VIS approach, STFormer, with a spatial-temporal feature aggregation (STFA) module and a spatial-temporal-aware Transformer (STT). Specifically, STFA efficiently produces robust high-resolution masked features for the decoder, while STT's location-guided instance query (LGIQ) improves the initial instance queries. STFormer thus preserves more fine-grained information, converges more efficiently, and localizes object instance features accurately. Extensive experiments on the YouTube-VIS 2019, YouTube-VIS 2021, and OVIS datasets show that STFormer outperforms mainstream VIS methods. |
| ISSN: | 2162-237X, 2162-2388 |
| DOI: | 10.1109/TNNLS.2024.3455551 |
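
The abstract describes the location-guided instance query (LGIQ) only at a high level, and the paper's exact formulation is not reproduced in this record. As a rough illustration of the general idea, the following minimal PyTorch sketch seeds each decoder query with a sinusoidal embedding of a learnable 2-D reference point instead of a purely random positional embedding; all names (`sine_pos_embed`, `LocationGuidedQueries`, `ref_logits`) are hypothetical and not taken from the authors' code.

```python
import math
import torch
import torch.nn as nn


def sine_pos_embed(xy: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """DETR-style sinusoidal embedding of normalized (x, y) points.

    xy: (num_queries, 2) with coordinates in [0, 1].
    Returns: (num_queries, dim).
    """
    scale = 2 * math.pi
    freqs = torch.arange(dim // 4, dtype=torch.float32, device=xy.device)
    freqs = 10000.0 ** (4.0 * freqs / dim)           # (dim//4,)
    pos = xy.unsqueeze(-1) * scale / freqs           # (N, 2, dim//4)
    pos = torch.cat([pos.sin(), pos.cos()], dim=-1)  # (N, 2, dim//2)
    return pos.flatten(-2)                           # (N, dim)


class LocationGuidedQueries(nn.Module):
    """Hypothetical sketch: each decoder query owns a learnable 2-D
    reference point whose sinusoidal embedding seeds the query, so
    queries start with an explicit location cue rather than being
    purely random."""

    def __init__(self, num_queries: int = 100, dim: int = 256):
        super().__init__()
        self.dim = dim
        self.content = nn.Embedding(num_queries, dim)  # learnable content part
        # Unconstrained logits; sigmoid maps them to normalized image coords.
        self.ref_logits = nn.Parameter(torch.randn(num_queries, 2))

    def forward(self, batch_size: int):
        ref_points = self.ref_logits.sigmoid()  # (N, 2) in [0, 1]
        queries = self.content.weight + sine_pos_embed(ref_points, self.dim)
        # Broadcast over the batch: (B, N, dim) queries, (N, 2) reference points.
        return queries.unsqueeze(0).expand(batch_size, -1, -1), ref_points


# Example: 100 location-aware queries for a batch of 2 video clips.
queries, refs = LocationGuidedQueries()(batch_size=2)
print(queries.shape, refs.shape)  # torch.Size([2, 100, 256]) torch.Size([100, 2])
```

In DETR-style decoders, this kind of initialization gives every query a spatial prior from the start, which is the sort of location cue the abstract credits with faster convergence and more accurate instance localization.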