STFormer: Spatial-Temporal-Aware Transformer for Video Instance Segmentation

Video instance segmentation (VIS) is a challenging task that requires classifying, segmenting, and tracking object instances in videos. Existing Transformer-based VIS approaches have shown remarkable success by combining encoded features and instance queries as decoder inputs. However, to limit computational cost, their decoder inputs are low-resolution, which loses fine-grained information, increases sensitivity to background interference, and degrades performance on small objects. Moreover, the instance queries are randomly initialized without location information, which hinders convergence and accurate localization of object instances. To address these issues, we propose a novel VIS approach, STFormer, built on a spatial-temporal feature aggregation (STFA) module and a spatial-temporal-aware Transformer (STT). Specifically, STFA efficiently produces robust high-resolution masked features for the decoder, while STT's location-guided instance query (LGIQ) improves the initial instance queries. As a result, STFormer preserves more fine-grained information, converges more efficiently, and localizes object instance features more accurately. Extensive experiments on the YouTube-VIS 2019, YouTube-VIS 2021, and OVIS datasets show that STFormer outperforms mainstream VIS methods.
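
The abstract describes the two components only at a high level. As an illustration, the following is a minimal PyTorch sketch of what a spatial-temporal feature-aggregation step and a location-guided query initialization could look like. The module names (SpatialTemporalAggregation, LocationGuidedQueries), tensor shapes, and operations are assumptions made for illustration; they are not the authors' implementation.

```python
# Conceptual sketch only, NOT the paper's STFA/LGIQ code: (1) fuse per-frame
# features over time and upsample to a higher-resolution decoder feature map,
# (2) initialize instance queries from top-scoring spatial locations instead
# of random embeddings. All names and shapes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTemporalAggregation(nn.Module):
    """Hypothetical STFA-style stand-in: temporal mixing + spatial upsampling."""

    def __init__(self, dim: int):
        super().__init__()
        self.temporal_mix = nn.Conv3d(dim, dim, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.refine = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C, H, W) low-resolution per-frame encoder features
        b, t, c, h, w = feats.shape
        x = self.temporal_mix(feats.permute(0, 2, 1, 3, 4))       # mix across time
        x = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        x = self.refine(x)                                         # (B*T, C, 2H, 2W)
        return x.reshape(b, t, c, 2 * h, 2 * w)


class LocationGuidedQueries(nn.Module):
    """Hypothetical LGIQ-style initializer: score locations, take features at top-k."""

    def __init__(self, dim: int, num_queries: int = 10):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)
        self.num_queries = num_queries

    def forward(self, frame_feat: torch.Tensor) -> torch.Tensor:
        # frame_feat: (B, C, H, W) aggregated features of a reference frame
        b, c, h, w = frame_feat.shape
        scores = self.score(frame_feat).flatten(2).squeeze(1)      # (B, H*W)
        topk = scores.topk(self.num_queries, dim=1).indices        # (B, k)
        flat = frame_feat.flatten(2).transpose(1, 2)               # (B, H*W, C)
        return torch.gather(flat, 1, topk.unsqueeze(-1).expand(-1, -1, c))  # (B, k, C)


if __name__ == "__main__":
    clip = torch.randn(2, 4, 64, 16, 16)            # 2 clips, 4 frames, 64-dim features
    agg = SpatialTemporalAggregation(64)(clip)       # (2, 4, 64, 32, 32)
    queries = LocationGuidedQueries(64)(agg[:, 0])   # (2, 10, 64) initial queries
    print(agg.shape, queries.shape)
```

In this sketch the temporal mixing is a lightweight 3-D convolution and the queries are simply the features at the highest-scoring locations of a reference frame; the actual STFA and LGIQ designs in the paper may differ substantially.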

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-10, Vol. PP, pp. 1-15
Authors: Li, Hao; Wang, Wei; Wang, Mengzhu; Tan, Huibin; Lan, Long; Luo, Zhigang; Liu, Xinwang; Li, Kenli
Format: Article
Language: English
Subjects: Computational modeling; Computer vision; Convergence; Decoding; Fine-grained information; Head; Instance segmentation; Interference; Motion segmentation; Object recognition; object tracking; transformer; Transformers; video instance segmentation (VIS)
Online access: Order full text (publisher record: https://ieeexplore.ieee.org/document/10721237)
DOI: 10.1109/TNNLS.2024.3455551
ISSN: 2162-237X
EISSN: 2162-2388
PMID: 39418149
Publisher: IEEE (United States)