Visual Object Multimodality Tracking Based on Correlation Filters for Edge Computing

Bibliographic details
Published in: Security and Communication Networks, 2020, Vol. 2020 (2020), p. 1-13
Main authors: Yang, Guosheng; Wei, Qisheng
Format: Article
Language: English
Online access: Full text
container_end_page 13
container_issue 2020
container_start_page 1
container_title Security and communication networks
container_volume 2020
creator Yang, Guosheng
Wei, Qisheng
description In recent years, visual object tracking has become a very active research field, mainly divided into correlation filter-based tracking and deep learning-based tracking (e.g., deep convolutional neural networks and Siamese networks). Target tracking algorithms based on deep learning require a large amount of computation and are usually deployed on expensive graphics cards. However, with the many monitoring devices in the Internet of Things, it is difficult to capture all moving targets on every device in real time, so hierarchical processing is necessary: correlation filter-based tracking is used in insensitive areas to relieve local computing pressure, while in sensitive areas the video stream is uploaded to a faster cloud computing platform that runs a deep-feature-based algorithm. This paper focuses on correlation filter-based tracking, in which the discriminative scale space tracker (DSST) is one of the most popular and representative methods and has been applied successfully in many fields. However, DSST still leaves room for improvement. First, the algorithm does not explicitly handle target rotation. Second, extracting histogram of oriented gradient (HOG) features from many patches centered at the target position, as required for accurate scale estimation, imposes a heavy computational load. To address these two problems, we introduce an alterable patch number for target scale tracking and a space search for target rotation tracking into the standard DSST method and propose a visual object multimodality tracker based on correlation filters (MTCF) that simultaneously copes with in-plane translation, scale, and rotation of the tracked target and obtains its position, scale, and attitude angle at the same time. Finally, experiments on the Visual Tracker Benchmark data set demonstrate the effectiveness of the proposed algorithms in multimodality tracking.
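The abstract describes a correlation filter evaluated over candidate patches at several scales (and, in MTCF, rotations). The sketch below is a minimal, hypothetical illustration of that idea, not the authors' MTCF or DSST code: a single-channel MOSSE-style filter is trained in the Fourier domain and then scored against a few resampled candidate patches. The patch size, the candidate scale/angle grids, the regularization term lam, and the helper names (train_filter, resample) are all illustrative assumptions.

```python
# Minimal sketch (assumption-laden, not the authors' MTCF/DSST implementation):
# a single-channel MOSSE-style correlation filter scored over a small grid of
# scale and in-plane rotation candidates.
import numpy as np
from scipy.ndimage import rotate, zoom


def gaussian_peak(shape, sigma=2.0):
    """Desired correlation output: a Gaussian centered on the patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))


def train_filter(patch, lam=1e-2):
    """Closed-form single-sample correlation filter in the Fourier domain."""
    G = np.fft.fft2(gaussian_peak(patch.shape))
    F = np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)


def response(H, patch):
    """Spatial correlation response of a candidate patch under filter H."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))


def resample(patch, scale, angle_deg, out_shape):
    """Rescale and rotate a candidate patch, then fit it to the filter size."""
    p = rotate(zoom(patch, scale, order=1), angle_deg, reshape=False, order=1)
    out = np.zeros(out_shape)
    h, w = min(out_shape[0], p.shape[0]), min(out_shape[1], p.shape[1])
    out[:h, :w] = p[:h, :w]          # top-left alignment is enough for a sketch
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.standard_normal((64, 64))   # stand-in for a grayscale target patch
    H = train_filter(template)

    scales = [0.95, 1.0, 1.05]                 # candidate sets; the paper's alterable
    angles = [-10.0, 0.0, 10.0]                # patch number adapts how many are searched
    best = max(
        ((s, a, response(H, resample(template, s, a, template.shape)).max())
         for s in scales for a in angles),
        key=lambda t: t[2],
    )
    print("best scale %.2f, best angle %.1f deg, peak response %.3f" % best)
```

In a full DSST-style tracker the filter would be built from multichannel HOG features and updated online frame by frame; the sketch only shows the Fourier-domain response computation and the brute-force search over scale and rotation hypotheses that the abstract refers to.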
doi_str_mv 10.1155/2020/8891035
format Article
fulltext fulltext
identifier ISSN: 1939-0114
ispartof Security and communication networks, 2020, Vol.2020 (2020), p.1-13
issn 1939-0114
1939-0122
language eng
recordid cdi_proquest_journals_2474917459
source Wiley Online Library Open Access; EZB-FREE-00999 freely available EZB journals; Alma/SFX Local Collection
subjects Accuracy
Algorithms
Artificial neural networks
Cloud computing
Correlation
Deep learning
Edge computing
Feature extraction
Histograms
Internet of Things
Machine learning
Moving targets
Neural networks
Optical tracking
Rotation
Video data
title Visual Object Multimodality Tracking Based on Correlation Filters for Edge Computing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T15%3A04%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Visual%20Object%20Multimodality%20Tracking%20Based%20on%20Correlation%20Filters%20for%20Edge%20Computing&rft.jtitle=Security%20and%20communication%20networks&rft.au=Yang,%20Guosheng&rft.date=2020&rft.volume=2020&rft.issue=2020&rft.spage=1&rft.epage=13&rft.pages=1-13&rft.issn=1939-0114&rft.eissn=1939-0122&rft_id=info:doi/10.1155/2020/8891035&rft_dat=%3Cproquest_cross%3E2474917459%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2474917459&rft_id=info:pmid/&rfr_iscdi=true