Learning Complementary Spatial-Temporal Transformer for Video Salient Object Detection

Besides combining appearance and motion information, another crucial factor for video salient object detection (VSOD) is to mine spatial-temporal (ST) knowledge, including complementary long-short temporal cues and global-local spatial context from neighboring frames. However, the existing methods only explored part of them and ignored their complementarity. In this article, we propose a novel complementary ST transformer (CoSTFormer) for VSOD, which has a short-global branch and a long-local branch to aggregate complementary ST contexts. The former integrates the global context from the neighboring two frames using dense pairwise attention, while the latter is designed to fuse long-term temporal information from more consecutive frames with local attention windows. In this way, we decompose the ST context into a short-global part and a long-local part and leverage the powerful transformer to model the context relationship and learn their complementarity. To solve the contradiction between local window attention and object motion, we propose a novel flow-guided window attention (FGWA) mechanism to align the attention windows with object and camera movements. Furthermore, we deploy CoSTFormer on fused appearance and motion features, thus enabling the effective combination of all three VSOD factors. Besides, we present a pseudo video generation method to synthesize sufficient video clips from static images for training ST saliency models. Extensive experiments have verified the effectiveness of our method and illustrated that we achieve new state-of-the-art results on several benchmark datasets.
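
The abstract above describes a two-branch decomposition of spatial-temporal context: a short-global branch applying dense pairwise attention over the tokens of two neighboring frames, and a long-local branch applying attention inside local spatial windows that span a longer clip. The PyTorch sketch below only illustrates that decomposition; the class name, feature dimension, head count, and window size are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): complementary spatial-temporal
# attention with a short-global branch and a long-local branch.
import torch
import torch.nn as nn


class ComplementarySTAttention(nn.Module):
    def __init__(self, dim=256, heads=8, window=7):
        super().__init__()
        self.window = window  # assumed local window size; H and W must be divisible by it
        self.short_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.long_local = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, short_feats, long_feats):
        # short_feats: (B, 2, H, W, C)  features of two neighboring frames
        # long_feats:  (B, T, H, W, C)  features of a longer clip, T > 2
        B, _, H, W, C = short_feats.shape
        T = long_feats.shape[1]

        # Short-global branch: dense pairwise attention over all tokens
        # of the two neighboring frames.
        sg = short_feats.reshape(B, 2 * H * W, C)
        sg, _ = self.short_global(sg, sg, sg)

        # Long-local branch: attention restricted to local spatial windows,
        # but spanning all T frames inside each window.
        w = self.window
        ll = long_feats.reshape(B, T, H // w, w, W // w, w, C)
        ll = ll.permute(0, 2, 4, 1, 3, 5, 6)          # (B, nH, nW, T, w, w, C)
        ll = ll.reshape(-1, T * w * w, C)             # one window = one sequence
        ll, _ = self.long_local(ll, ll, ll)
        ll = ll.reshape(B, H // w, W // w, T, w, w, C)
        ll = ll.permute(0, 3, 1, 4, 2, 5, 6).reshape(B, T, H, W, C)

        return sg.reshape(B, 2, H, W, C), ll
```

In this reading, the short-global branch trades temporal range for full spatial coverage, while the long-local branch trades spatial coverage for a longer temporal horizon, which is the complementarity the abstract refers to.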

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-08, Vol. 35 (8), p. 10663-10673
Main Authors: Liu, Nian; Nan, Kepan; Zhao, Wangbo; Yao, Xiwen; Han, Junwei
Format: Article
Language: English
Subjects: Aggregates; Attention models; Computational modeling; Context modeling; Feature extraction; Object detection; Optical flow; saliency detection; transformer; Transformers; video salient object detection (VSOD)
Online Access: Order full text
container_end_page 10673
container_issue 8
container_start_page 10663
container_title IEEE Transactions on Neural Networks and Learning Systems
container_volume 35
creator Liu, Nian
Nan, Kepan
Zhao, Wangbo
Yao, Xiwen
Han, Junwei
description Besides combining appearance and motion information, another crucial factor for video salient object detection (VSOD) is to mine spatial-temporal (ST) knowledge, including complementary long-short temporal cues and global-local spatial context from neighboring frames. However, the existing methods only explored part of them and ignored their complementarity. In this article, we propose a novel complementary ST transformer (CoSTFormer) for VSOD, which has a short-global branch and a long-local branch to aggregate complementary ST contexts. The former integrates the global context from the neighboring two frames using dense pairwise attention, while the latter is designed to fuse long-term temporal information from more consecutive frames with local attention windows. In this way, we decompose the ST context into a short-global part and a long-local part and leverage the powerful transformer to model the context relationship and learn their complementarity. To solve the contradiction between local window attention and object motion, we propose a novel flow-guided window attention (FGWA) mechanism to align the attention windows with object and camera movements. Furthermore, we deploy CoSTFormer on fused appearance and motion features, thus enabling the effective combination of all three VSOD factors. Besides, we present a pseudo video generation method to synthesize sufficient video clips from static images for training ST saliency models. Extensive experiments have verified the effectiveness of our method and illustrated that we achieve new state-of-the-art results on several benchmark datasets.
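The flow-guided window attention (FGWA) mechanism mentioned in the description is said to align local attention windows with object and camera movements. One plausible way to realize such alignment, shown below purely as an assumption-laden sketch rather than the paper's actual FGWA, is to warp each frame's features toward a reference frame using optical flow before the windows are partitioned; the function name, tensor layout, and flow convention are all hypothetical.

```python
# Illustrative sketch (assumption, not the paper's FGWA implementation):
# warp each frame's features toward a reference frame using optical flow,
# so that fixed local attention windows stay aligned with moving content.
import torch
import torch.nn.functional as F


def flow_align(feats, flows):
    """feats: (B, T, C, H, W) frame features.
    flows: (B, T, 2, H, W) optical flow from each frame to the reference
    frame, in pixel units (dx, dy). Returns flow-warped features."""
    B, T, C, H, W = feats.shape
    # Base sampling grid in pixel coordinates, (x, y) order.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=feats.dtype, device=feats.device),
        torch.arange(W, dtype=feats.dtype, device=feats.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1)              # (H, W, 2)

    warped = []
    for t in range(T):
        flow = flows[:, t].permute(0, 2, 3, 1)        # (B, H, W, 2)
        grid = base.unsqueeze(0) + flow               # shifted sampling points
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid_x = 2.0 * grid[..., 0] / (W - 1) - 1.0
        grid_y = 2.0 * grid[..., 1] / (H - 1) - 1.0
        grid = torch.stack((grid_x, grid_y), dim=-1)
        warped.append(F.grid_sample(feats[:, t], grid, align_corners=True))
    return torch.stack(warped, dim=1)                 # (B, T, C, H, W)
```

After such warping, a fixed window grid covers roughly the same scene content in every frame, so window-restricted attention can still relate a moving object to itself across time.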
doi_str_mv 10.1109/TNNLS.2023.3243246
format Article
publisher IEEE (United States)
pmid 37027778
coden ITNNAL
ieee_id 10045751 (https://ieeexplore.ieee.org/document/10045751)
orcid 0000-0002-7466-7428; 0000-0002-0825-6081; 0000-0001-5545-7217; 0000-0001-9545-7991
fulltext fulltext_linktorsrc
identifier ISSN: 2162-237X
ispartof IEEE Transactions on Neural Networks and Learning Systems, 2024-08, Vol.35 (8), p.10663-10673
issn 2162-237X
eissn 2162-2388
language eng
recordid cdi_crossref_primary_10_1109_TNNLS_2023_3243246
source IEEE Electronic Library (IEL)
subjects Aggregates
Attention models
Computational modeling
Context modeling
Feature extraction
Object detection
Optical flow
saliency detection
transformer
Transformers
video salient object detection (VSOD)
title Learning Complementary Spatial-Temporal Transformer for Video Salient Object Detection
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T09%3A29%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Complementary%20Spatial-Temporal%20Transformer%20for%20Video%20Salient%20Object%20Detection&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Liu,%20Nian&rft.date=2024-08&rft.volume=35&rft.issue=8&rft.spage=10663&rft.epage=10673&rft.pages=10663-10673&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2023.3243246&rft_dat=%3Cproquest_RIE%3E2798710852%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2798710852&rft_id=info:pmid/37027778&rft_ieee_id=10045751&rfr_iscdi=true