Inverse Sparse Group Lasso Model for Robust Object Tracking

Sparse representation has been applied to visual tracking. The visual tracking models based on sparse representation use a template set as dictionary atoms to reconstruct candidate samples without considering similarity among atoms. In this paper, we present a robust tracking method based on the inverse sparse group lasso model. Our method exploits both the group structure of similar candidate samples and the local structure between templates and samples. Unlike the conventional sparse representation, the templates are encoded by the candidate samples, and similar samples are selected to reconstruct the template at the group level, which facilitates inter-group sparsity. Every sample group achieves intra-group sparsity, so the information shared between related dictionary atoms is taken into account. Moreover, the local structure between templates and samples is considered in building the reconstruction model, which ensures that the similarity of the computed coefficients is consistent with the similarity between templates and samples. A gradient descent-based optimization method is employed, and a sparse mapping table is obtained from the coefficient matrix and a hash-distance weight matrix. Experiments were conducted on publicly available datasets, with a comparison against 20 state-of-the-art methods. Both qualitative and quantitative results are reported. The proposed method demonstrated improved robustness and accuracy with comparable computational complexity.
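The abstract is high-level, so as a rough illustration (not the authors' implementation), the core idea of encoding a template over a dictionary of candidate samples with a sparse group lasso penalty can be sketched via proximal gradient descent. All names, the regularization weights `lam1`/`lam2`, and the group partition below are assumptions for the sketch:

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm,
    # which produces intra-group (elementwise) sparsity.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sparse_group_lasso(beta, groups, lam1, lam2, step):
    """Proximal operator of lam1*||b||_1 + lam2*sum_g ||b_g||_2."""
    out = soft_threshold(beta, step * lam1)
    for g in groups:
        norm = np.linalg.norm(out[g])
        # Group-level shrinkage: entire groups of coefficients can be
        # driven exactly to zero, giving inter-group sparsity.
        scale = max(0.0, 1.0 - step * lam2 / norm) if norm > 0 else 0.0
        out[g] = scale * out[g]
    return out

def track_template(D, t, groups, lam1=0.05, lam2=0.1, iters=200):
    """Encode one template t over candidate-sample dictionary D (columns)
    by proximal gradient descent on 0.5*||D b - t||^2 plus both penalties."""
    n = D.shape[1]
    b = np.zeros(n)
    # Step size 1/L, with L the Lipschitz constant of the smooth gradient
    # (squared spectral norm of D).
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        grad = D.T @ (D @ b - t)
        b = prox_sparse_group_lasso(b - step * grad, groups, lam1, lam2, step)
    return b
```

This is only the "inverse" encoding step (template reconstructed from candidate samples); the paper's hash-distance weighting and sparse mapping table are not reproduced here.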

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2017-08, Vol. 19 (8), pp. 1798-1810
Authors: Zhou, Yun; Han, Jianghong; Yuan, Xiaohui; Wei, Zhenchun; Hong, Richang
Format: Article
Language: English
DOI: 10.1109/TMM.2017.2689918
ISSN: 1520-9210
EISSN: 1941-0077
Source: IEEE Electronic Library (IEL)
Subjects:
Accuracy
Atomic structure
Coding
Complexity
Computational modeling
Computer vision
Dictionaries
hash distance
Image reconstruction
Object tracking
Optical tracking
Reconstruction
Robustness
Robustness (mathematics)
Similarity
sparse coding
sparse group lasso
Sparse matrices
Sparsity
visual tracking