Toward Accurate Pixelwise Object Tracking via Attention Retrieval

Pixelwise single object tracking is challenging because running speed and segmentation accuracy compete with each other. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing the computation of the backbone network; e.g., SiamMask and D3S fork a light branch from the tracking model to predict a segmentation mask. Although efficient, directly reusing features from tracking networks may harm segmentation accuracy, since background clutter in the backbone features tends to introduce false positives into the segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework applies soft spatial constraints to the backbone features to obtain an accurate global segmentation map. Concretely, ARN first builds a look-up table (LUT) that fully exploits the information in the first frame. By retrieving from it, a target-aware attention map is generated to suppress the negative influence of background clutter. To further refine the contour of the segmentation, IFN iteratively enhances features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. Code is available at https://github.com/JudasDie/SOTS.
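
Since this record carries only the abstract, the following is a minimal PyTorch sketch of the two ideas it describes: attention retrieval against a first-frame look-up table (ARN) and mask-feedback refinement (IFN). Everything here is an illustrative assumption rather than the authors' implementation: the function names (build_lut, retrieve_attention), the cosine-similarity matching, and the one-layer segmentation head are invented for exposition; the real code is in the linked SOTS repository.

```python
# Illustrative sketch only -- names, similarity measure, and heads are
# assumptions, not the paper's actual architecture.
import torch
import torch.nn.functional as F

def build_lut(first_feat, first_mask):
    """Collect normalized foreground/background pixel embeddings from the
    first frame into a simple look-up table (LUT)."""
    c = first_feat.shape[0]
    keys = F.normalize(first_feat.reshape(c, -1).t(), dim=1)  # (H*W, C)
    fg = first_mask.reshape(-1).bool()
    return keys[fg], keys[~fg]                                # target / clutter entries

def retrieve_attention(search_feat, fg_keys, bg_keys):
    """Query the LUT with every search-frame pixel; pixels that match target
    entries better than background entries receive high attention."""
    c, h, w = search_feat.shape
    q = F.normalize(search_feat.reshape(c, -1).t(), dim=1)    # (H*W, C) queries
    fg_sim = (q @ fg_keys.t()).max(dim=1).values              # best target match
    bg_sim = (q @ bg_keys.t()).max(dim=1).values              # best clutter match
    return torch.sigmoid(fg_sim - bg_sim).reshape(1, h, w)    # soft spatial constraint

# Toy usage: gate backbone features with the retrieved attention instead of
# hard-cropping to the tracker's box, then refine with mask feedback.
feat0 = torch.randn(64, 32, 32)              # first-frame backbone feature (toy)
mask0 = (torch.rand(32, 32) > 0.7).float()   # first-frame ground-truth mask (toy)
fg_keys, bg_keys = build_lut(feat0, mask0)

feat_t = torch.randn(64, 32, 32)             # current search-frame feature (toy)
attn = retrieve_attention(feat_t, fg_keys, bg_keys)

seg_head = torch.nn.Conv2d(64, 1, kernel_size=1)     # toy segmentation head
logits = seg_head((feat_t * attn).unsqueeze(0))      # initial mask logits
for _ in range(2):                                   # IFN-style feedback loop
    gate = torch.sigmoid(logits)                     # predicted mask as guidance
    logits = seg_head(feat_t.unsqueeze(0) * gate)    # re-predict on gated features
```

Note the design choice the abstract emphasizes: the attention map softly gates the whole backbone feature, so the mask is predicted globally rather than being hard-cropped to the tracker's bounding box.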

Bibliographic Details

Published in: IEEE Transactions on Image Processing, 2021, Vol. 30, pp. 8553-8566
Main authors: Zhang, Zhipeng; Liu, Yufan; Li, Bing; Hu, Weiming; Peng, Houwen
Format: Article
Language: English
DOI: 10.1109/TIP.2021.3117077
ISSN: 1057-7149
EISSN: 1941-0042
PMID: 34618673
Publisher: IEEE, Piscataway
fullrecord <record><control><sourceid>proquest_RIE</sourceid><recordid>TN_cdi_crossref_primary_10_1109_TIP_2021_3117077</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9563126</ieee_id><sourcerecordid>2580692612</sourcerecordid><originalsourceid>FETCH-LOGICAL-c324t-d0466021b1f18ddd500d5afbf7c398abe6c46a58aedd32f7e8134000698d0bac3</originalsourceid><addsrcrecordid>eNqNkMFrFDEUh4MotlbvgpcBL4LM-l6SSTLHZalaKLTIeh4yyRvJOp2pSaar_70Ztih4MpeXw_e99-PH2GuEDSK0H_ZXtxsOHDcCUYPWT9g5thJrAMmflj80utYo2zP2IqUDAMoG1XN2JqRCo7Q4Z9v9fLTRV1vnlmgzVbfhJ43HkKi66Q_kcrWP1n0P07fqIdhqmzNNOcxT9YVyDPRgx5fs2WDHRK8e5wX7-vFyv_tcX998utptr2snuMy1B6lUidrjgMZ73wD4xg79oJ1oje1JOalsYyx5L_igyaCQAKBa46G3Tlywd6e993H-sVDK3V1IjsbRTjQvqeONKTBXyAv69h_0MC9xKulWChsNRppCwYlycU4p0tDdx3Bn468OoVvr7Uq93Vpv91hvUd6flCP185BcoMnRH62k1aCMKGx56wHz__QuZLsWu5uXKRf1zUkNRH-VtlECuRK_AaAEk8o</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2581570848</pqid></control><display><type>article</type><title>Toward Accurate Pixelwise Object Tracking via Attention Retrieval</title><source>IEEE Electronic Library (IEL)</source><creator>Zhang, Zhipeng ; Liu, Yufan ; Li, Bing ; Hu, Weiming ; Peng, Houwen</creator><creatorcontrib>Zhang, Zhipeng ; Liu, Yufan ; Li, Bing ; Hu, Weiming ; Peng, Houwen</creatorcontrib><description>Pixelwise single object tracking is challenging due to the competition of running speeds and segmentation accuracy. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing computation of the backbone network, e.g. , SiamMask and D3S fork a light branch from the tracking model to predict segmentation mask. Although efficient, directly reusing features from tracking networks may harm the segmentation accuracy, since background clutter in the backbone feature tends to introduce false positives in segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework performs soft spatial constraints on backbone features to obtain an accurate global segmentation map. Concretely, in ARN, a look-up-table (LUT) is first built by sufficiently using the information of the first frame. By retrieving it, a target-aware attention map is generated to suppress the negative influence of background clutter. To ulteriorly refine the contour of the segmentation, IFN iteratively enhances the features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. 
Code is available at https://github.com/JudasDie/SOTS .</description><identifier>ISSN: 1057-7149</identifier><identifier>EISSN: 1941-0042</identifier><identifier>DOI: 10.1109/TIP.2021.3117077</identifier><identifier>PMID: 34618673</identifier><identifier>CODEN: IIPRE4</identifier><language>eng</language><publisher>PISCATAWAY: IEEE</publisher><subject>Accuracy ; attention retrieval ; Benchmark testing ; Clutter ; Computer networks ; Computer Science ; Computer Science, Artificial Intelligence ; Engineering ; Engineering, Electrical &amp; Electronic ; Feedback ; Image segmentation ; Iterative methods ; Object tracking ; object tracking and segmentation ; Pixelwise tracking ; Predictive models ; Retrieval ; Science &amp; Technology ; Segmentation ; Table lookup ; Target tracking ; Technology ; Tracking networks</subject><ispartof>IEEE transactions on image processing, 2021, Vol.30, p.8553-8566</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>true</woscitedreferencessubscribed><woscitedreferencescount>17</woscitedreferencescount><woscitedreferencesoriginalsourcerecordid>wos000706831700008</woscitedreferencesoriginalsourcerecordid><citedby>FETCH-LOGICAL-c324t-d0466021b1f18ddd500d5afbf7c398abe6c46a58aedd32f7e8134000698d0bac3</citedby><cites>FETCH-LOGICAL-c324t-d0466021b1f18ddd500d5afbf7c398abe6c46a58aedd32f7e8134000698d0bac3</cites><orcidid>0000-0002-5888-6735 ; 0000-0002-8426-9335 ; 0000-0003-0479-332X ; 0000-0001-9237-8825</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9563126$$EHTML$$P50$$Gieee$$H</linktohtml><link.rule.ids>315,781,785,797,4025,27927,27928,27929,54762</link.rule.ids><linktorsrc>$$Uhttps://ieeexplore.ieee.org/document/9563126$$EView_record_in_IEEE$$FView_record_in_$$GIEEE</linktorsrc></links><search><creatorcontrib>Zhang, Zhipeng</creatorcontrib><creatorcontrib>Liu, Yufan</creatorcontrib><creatorcontrib>Li, Bing</creatorcontrib><creatorcontrib>Hu, Weiming</creatorcontrib><creatorcontrib>Peng, Houwen</creatorcontrib><title>Toward Accurate Pixelwise Object Tracking via Attention Retrieval</title><title>IEEE transactions on image processing</title><addtitle>TIP</addtitle><addtitle>IEEE T IMAGE PROCESS</addtitle><description>Pixelwise single object tracking is challenging due to the competition of running speeds and segmentation accuracy. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing computation of the backbone network, e.g. , SiamMask and D3S fork a light branch from the tracking model to predict segmentation mask. Although efficient, directly reusing features from tracking networks may harm the segmentation accuracy, since background clutter in the backbone feature tends to introduce false positives in segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework performs soft spatial constraints on backbone features to obtain an accurate global segmentation map. Concretely, in ARN, a look-up-table (LUT) is first built by sufficiently using the information of the first frame. 
By retrieving it, a target-aware attention map is generated to suppress the negative influence of background clutter. To ulteriorly refine the contour of the segmentation, IFN iteratively enhances the features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. Code is available at https://github.com/JudasDie/SOTS .</description><subject>Accuracy</subject><subject>attention retrieval</subject><subject>Benchmark testing</subject><subject>Clutter</subject><subject>Computer networks</subject><subject>Computer Science</subject><subject>Computer Science, Artificial Intelligence</subject><subject>Engineering</subject><subject>Engineering, Electrical &amp; Electronic</subject><subject>Feedback</subject><subject>Image segmentation</subject><subject>Iterative methods</subject><subject>Object tracking</subject><subject>object tracking and segmentation</subject><subject>Pixelwise tracking</subject><subject>Predictive models</subject><subject>Retrieval</subject><subject>Science &amp; Technology</subject><subject>Segmentation</subject><subject>Table lookup</subject><subject>Target tracking</subject><subject>Technology</subject><subject>Tracking networks</subject><issn>1057-7149</issn><issn>1941-0042</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2021</creationdate><recordtype>article</recordtype><sourceid>RIE</sourceid><sourceid>HGBXW</sourceid><recordid>eNqNkMFrFDEUh4MotlbvgpcBL4LM-l6SSTLHZalaKLTIeh4yyRvJOp2pSaar_70Ztih4MpeXw_e99-PH2GuEDSK0H_ZXtxsOHDcCUYPWT9g5thJrAMmflj80utYo2zP2IqUDAMoG1XN2JqRCo7Q4Z9v9fLTRV1vnlmgzVbfhJ43HkKi66Q_kcrWP1n0P07fqIdhqmzNNOcxT9YVyDPRgx5fs2WDHRK8e5wX7-vFyv_tcX998utptr2snuMy1B6lUidrjgMZ73wD4xg79oJ1oje1JOalsYyx5L_igyaCQAKBa46G3Tlywd6e993H-sVDK3V1IjsbRTjQvqeONKTBXyAv69h_0MC9xKulWChsNRppCwYlycU4p0tDdx3Bn468OoVvr7Uq93Vpv91hvUd6flCP185BcoMnRH62k1aCMKGx56wHz__QuZLsWu5uXKRf1zUkNRH-VtlECuRK_AaAEk8o</recordid><startdate>2021</startdate><enddate>2021</enddate><creator>Zhang, Zhipeng</creator><creator>Liu, Yufan</creator><creator>Li, Bing</creator><creator>Hu, Weiming</creator><creator>Peng, Houwen</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE)</general><scope>97E</scope><scope>RIA</scope><scope>RIE</scope><scope>BLEPL</scope><scope>DTL</scope><scope>HGBXW</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>7SP</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><scope>7X8</scope><orcidid>https://orcid.org/0000-0002-5888-6735</orcidid><orcidid>https://orcid.org/0000-0002-8426-9335</orcidid><orcidid>https://orcid.org/0000-0003-0479-332X</orcidid><orcidid>https://orcid.org/0000-0001-9237-8825</orcidid></search><sort><creationdate>2021</creationdate><title>Toward Accurate Pixelwise Object Tracking via Attention Retrieval</title><author>Zhang, Zhipeng ; Liu, Yufan ; Li, Bing ; Hu, Weiming ; Peng, Houwen</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c324t-d0466021b1f18ddd500d5afbf7c398abe6c46a58aedd32f7e8134000698d0bac3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2021</creationdate><topic>Accuracy</topic><topic>attention retrieval</topic><topic>Benchmark testing</topic><topic>Clutter</topic><topic>Computer networks</topic><topic>Computer Science</topic><topic>Computer Science, Artificial Intelligence</topic><topic>Engineering</topic><topic>Engineering, Electrical &amp; Electronic</topic><topic>Feedback</topic><topic>Image segmentation</topic><topic>Iterative methods</topic><topic>Object tracking</topic><topic>object tracking and segmentation</topic><topic>Pixelwise tracking</topic><topic>Predictive models</topic><topic>Retrieval</topic><topic>Science &amp; Technology</topic><topic>Segmentation</topic><topic>Table lookup</topic><topic>Target tracking</topic><topic>Technology</topic><topic>Tracking networks</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Zhang, Zhipeng</creatorcontrib><creatorcontrib>Liu, Yufan</creatorcontrib><creatorcontrib>Li, Bing</creatorcontrib><creatorcontrib>Hu, Weiming</creatorcontrib><creatorcontrib>Peng, Houwen</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library (IEL)</collection><collection>Web of Science Core Collection</collection><collection>Science Citation Index Expanded</collection><collection>Web of Science - Science Citation Index Expanded - 2021</collection><collection>CrossRef</collection><collection>Computer and Information Systems Abstracts</collection><collection>Electronics &amp; Communications Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts – Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><collection>MEDLINE - Academic</collection><jtitle>IEEE transactions on image processing</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Zhang, Zhipeng</au><au>Liu, Yufan</au><au>Li, Bing</au><au>Hu, Weiming</au><au>Peng, Houwen</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Toward Accurate Pixelwise Object Tracking via Attention Retrieval</atitle><jtitle>IEEE transactions on image processing</jtitle><stitle>TIP</stitle><stitle>IEEE T IMAGE 
PROCESS</stitle><date>2021</date><risdate>2021</risdate><volume>30</volume><spage>8553</spage><epage>8566</epage><pages>8553-8566</pages><issn>1057-7149</issn><eissn>1941-0042</eissn><coden>IIPRE4</coden><abstract>Pixelwise single object tracking is challenging due to the competition of running speeds and segmentation accuracy. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing computation of the backbone network, e.g. , SiamMask and D3S fork a light branch from the tracking model to predict segmentation mask. Although efficient, directly reusing features from tracking networks may harm the segmentation accuracy, since background clutter in the backbone feature tends to introduce false positives in segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework performs soft spatial constraints on backbone features to obtain an accurate global segmentation map. Concretely, in ARN, a look-up-table (LUT) is first built by sufficiently using the information of the first frame. By retrieving it, a target-aware attention map is generated to suppress the negative influence of background clutter. To ulteriorly refine the contour of the segmentation, IFN iteratively enhances the features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. Code is available at https://github.com/JudasDie/SOTS .</abstract><cop>PISCATAWAY</cop><pub>IEEE</pub><pmid>34618673</pmid><doi>10.1109/TIP.2021.3117077</doi><tpages>14</tpages><orcidid>https://orcid.org/0000-0002-5888-6735</orcidid><orcidid>https://orcid.org/0000-0002-8426-9335</orcidid><orcidid>https://orcid.org/0000-0003-0479-332X</orcidid><orcidid>https://orcid.org/0000-0001-9237-8825</orcidid></addata></record>

Subjects: Accuracy; attention retrieval; Benchmark testing; Clutter; Computer networks; Computer Science; Computer Science, Artificial Intelligence; Engineering; Engineering, Electrical & Electronic; Feedback; Image segmentation; Iterative methods; Object tracking; object tracking and segmentation; Pixelwise tracking; Predictive models; Retrieval; Science & Technology; Segmentation; Table lookup; Target tracking; Technology; Tracking networks