Interpretation enhancement method based on attention mechanism

The invention discloses an interpretation enhancement method based on an attention mechanism, and relates to the field of reinforcement learning. The method includes a deep reinforcement learning interpretation enhancement module (IEMA), which comprises a channel attention module and a spatial attention module. The channel attention module performs global pooling on the input feature map, derives weights for the original input feature map, and applies those weights to the original input. The spatial attention module pools the weighted input along the channel direction and then applies a convolution to obtain a spatial attention weight that acts on the input. Through the combination of the two attention modules, the method obtains a weighted feature map.
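The abstract describes a channel-then-spatial attention pipeline applied to feature maps. The following PyTorch sketch illustrates one way such a pipeline could be assembled under those assumptions; the class names (ChannelAttention, SpatialAttention, IEMA), the reduction ratio, the use of both average and max pooling, and the 7x7 convolution kernel are illustrative choices and are not taken from the patent itself.

# Minimal sketch of the channel + spatial attention pipeline described above.
# All names and hyperparameters are assumptions for illustration, not the
# patent's actual implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Globally pool the input feature map, derive per-channel weights,
    and apply them to the original input."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global pooling over H x W
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # the channel weights act on the original input


class SpatialAttention(nn.Module):
    """Pool the channel-weighted input along the channel direction,
    then convolve to obtain a spatial attention weight."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)   # average pooling along channels
        mx, _ = x.max(dim=1, keepdim=True)  # max pooling along channels
        w = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w  # the spatial weight acts on the weighted input


class IEMA(nn.Module):
    """Combine the two modules to produce the weighted feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # Example: weight a batch of 8 feature maps with 64 channels.
    features = torch.randn(8, 64, 32, 32)
    weighted = IEMA(64)(features)
    print(weighted.shape)  # torch.Size([8, 64, 32, 32])

In a deep reinforcement learning agent, such a module would typically sit between convolutional layers of the policy or value network, so that the resulting channel and spatial weights can be inspected or visualized for interpretation.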

Bibliographic details
Main authors: ZHOU XIANZHONG, ZHU ZHAOQUAN, SUN YUXIANG, GAO BO
Format: Patent
Language: Chinese; English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS
Record ID: cdi_epo_espacenet_CN117610638A
Source: esp@cenet