Single target tracking method based on example attention mechanism

The invention discloses a single target tracking method based on an example (instance) attention mechanism. The method comprises the following steps: S1, obtain a deep fusion feature map of a template image and a search image; S2, compute instance-level self-attention over the deep fusion feature map to obtain a response map; and S3, perform target localization and bounding-box regression on the resulting response map. The method combines the attention mechanisms and Siamese (twin) network structures popular in current visual object tracking, and adopts a lightweight backbone network: template features and search features are fully fused by a pixel-level feature-fusion module, the multi-channel feature map is abstracted into feature vectors by adaptive max pooling and adaptive average pooling, and these feature vectors are fed into the tracking algorithm. Dataset information is packaged into an instance representation, and information loss caus…
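The abstract outlines a three-step pipeline: deep fusion of template and search features (S1), instance-level self-attention producing a response map (S2), and localization plus box regression (S3). Below is a minimal PyTorch sketch of such a pipeline, for orientation only: the abstract does not disclose the actual architecture, so every name, channel size, and the exact attention formulation (`PixelFusion`, `InstanceAttention`, the 1x1 fusion convolution, the pooled instance query) is an assumption of this sketch, not the patented design. The one detail taken directly from the abstract is the use of adaptive max + average pooling to abstract the multi-channel map into feature vectors.

```python
# Minimal PyTorch sketch of the pipeline the abstract describes.
# All names, channel sizes, and the exact attention formulation here are
# illustrative assumptions -- the patent abstract does not disclose them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelFusion(nn.Module):
    """S1 (assumed form): fuse template and search features pixel by pixel."""

    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # z: (B, C, Hz, Wz) template features; x: (B, C, Hx, Wx) search features.
        # Collapse the template to one vector, tile it over the search map,
        # then mix the concatenation with a 1x1 convolution.
        z_tiled = F.adaptive_avg_pool2d(z, 1).expand(-1, -1, *x.shape[-2:])
        return self.mix(torch.cat([x, z_tiled], dim=1))


class InstanceAttention(nn.Module):
    """S2 (assumed form): instance-level self-attention yielding a response map.

    The instance vector is built from adaptive max + average pooling, which is
    the one architectural detail the abstract does state.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Linear(2 * channels, channels)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        # Abstract the multi-channel map into one feature vector per image.
        vec = torch.cat(
            [
                F.adaptive_max_pool2d(f, 1).flatten(1),
                F.adaptive_avg_pool2d(f, 1).flatten(1),
            ],
            dim=1,
        )                                        # (B, 2C)
        q = self.query(vec).unsqueeze(1)         # (B, 1, C) instance query
        k = self.key(f).flatten(2)               # (B, C, H*W) per-pixel keys
        scores = torch.bmm(q, k) / c ** 0.5      # (B, 1, H*W) attention scores
        return scores.view(b, 1, h, w)           # single-channel response map


class SiameseTracker(nn.Module):
    """S1-S3 end to end, with a stand-in lightweight backbone."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(           # shared (twin) backbone
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fuse = PixelFusion(channels)
        self.attn = InstanceAttention(channels)
        self.bbox = nn.Conv2d(channels, 4, kernel_size=1)  # l, t, r, b offsets

    def forward(self, template: torch.Tensor, search: torch.Tensor):
        fused = self.fuse(self.backbone(template), self.backbone(search))  # S1
        response = self.attn(fused)                                        # S2
        # S3: the response peak localizes the target; an anchor-free head
        # regresses box offsets at every spatial position.
        boxes = self.bbox(fused * torch.sigmoid(response))
        return response, boxes


tracker = SiameseTracker()
z = torch.randn(1, 3, 127, 127)   # template crop
x = torch.randn(1, 3, 255, 255)   # search crop
response, boxes = tracker(z, x)   # response: (1, 1, 64, 64); boxes: (1, 4, 64, 64)
```

At inference, the argmax of `response` would give the target position and the four `boxes` channels at that position the box, mirroring common anchor-free Siamese trackers; whether the patented method decodes its response map this way is not stated in the abstract.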

Bibliographic Details
Main authors: XIAO XIANBING, CHEN ZHEN, LIU JUN, MENG FANQIN, XIONG XINGZHONG
Format: Patent
Language: Chinese; English
Published: 2023-11-24
creator XIAO XIANBING
CHEN ZHEN
LIU JUN
MENG FANQIN
XIONG XINGZHONG
format Patent
language chi ; eng
recordid cdi_epo_espacenet_CN117115202A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
title Single target tracking method based on example attention mechanism
url https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20231124&DB=EPODOC&CC=CN&NR=117115202A