Learning Common and Feature-Specific Patterns: A Novel Multiple-Sparse-Representation-Based Tracker

The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations, while each feature should also retain feature-specific representation patterns that reflect its complementary contribution to appearance modeling. Unlike existing multi-feature sparse trackers, which consider only the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple-sparse-representation framework for visual tracking that jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning method to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.
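The abstract describes the shared-plus-specific decomposition only at a high level. As a rough illustration of how sparse codes over multiple features can be split into a component shared across features and a feature-specific remainder, the following minimal NumPy sketch assumes a "dirty-model"-style objective: each feature x_k is coded over its dictionary D_k with coefficients C[:, k] + E[:, k], an l2,1 (row-group) penalty on C couples the sparsity pattern across features, and an element-wise l1 penalty on E permits feature-specific deviations. The objective, the proximal-gradient solver, and all names here are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def prox_l21_rows(C, t):
    # Row-wise group soft-thresholding: zeroing an entire row of C drops that
    # dictionary atom for all features at once, coupling their sparsity patterns.
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return C * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def prox_l1(E, t):
    # Element-wise soft-thresholding: each feature may keep its own atoms.
    return np.sign(E) * np.maximum(np.abs(E) - t, 0.0)

def decompose_sparse_codes(X, D, lam_shared=0.1, lam_specific=0.1, n_iter=300):
    # X: list of K feature vectors x_k; D: list of K dictionaries D_k, all with n atoms.
    # Minimizes 0.5 * sum_k ||x_k - D_k (C[:,k] + E[:,k])||^2
    #           + lam_shared * ||C||_{2,1} + lam_specific * ||E||_1
    # by simultaneous proximal-gradient steps on C (shared) and E (feature-specific).
    K, n = len(X), D[0].shape[1]
    C, E = np.zeros((n, K)), np.zeros((n, K))
    # The joint Hessian over (C, E) doubles the per-block curvature, hence the 0.5.
    step = 0.5 / max(np.linalg.norm(Dk, 2) ** 2 for Dk in D)
    for _ in range(n_iter):
        G = np.empty((n, K))
        for k in range(K):
            G[:, k] = D[k].T @ (D[k] @ (C[:, k] + E[:, k]) - X[k])
        C = prox_l21_rows(C - step * G, step * lam_shared)
        E = prox_l1(E - step * G, step * lam_specific)
    return C, E

In a tracking loop, one would build each D_k from target templates in one feature channel (e.g., intensity, HOG, color), code every candidate region, and favor candidates whose shared component C reconstructs all features well.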

Bibliographic details
Published in: IEEE Transactions on Image Processing, 2018-04, Vol. 27 (4), pp. 2022-2037
Authors: Xiangyuan Lan, Shengping Zhang, Pong C. Yuen, Rama Chellappa
Format: Article
Language: English
Subjects: Adaptation models, Electronic mail, Feature extraction, feature fusion, Lighting, Measurement, metric learning, Robustness, sparse representation, Visual tracking, Visualization
Online access: Order full text
DOI: 10.1109/TIP.2017.2777183
ISSN: 1057-7149
EISSN: 1941-0042
PMID: 29989985
Source: IEEE Electronic Library (IEL)