Aspect-level multi-modal sentiment analysis method based on double channels and attention mechanism

The invention relates to an aspect-level multimodal sentiment analysis method based on two channels and an attention mechanism, in which, on the basis of a neural network, the sentiment information contained in image features is extracted at multiple scales by combining aspect-word features and text features through the attention mechanism...

Detailed description

Bibliographic details
Main authors: XU LU, LIANG YAN, HOU ZENGHUI, YIN ENTONG, CHEN SIXU
Format: Patent
Language: Chinese; English
Online access: order full text
creator XU LU ; LIANG YAN ; HOU ZENGHUI ; YIN ENTONG ; CHEN SIXU
description The invention relates to an aspect-level multimodal sentiment analysis method based on two channels and an attention mechanism. On the basis of a neural network, the sentiment information contained in image features is extracted at multiple scales by combining aspect-word features and text features through the attention mechanism, and a graph convolutional network (GCN) is introduced into the aspect-level multimodal sentiment analysis task, which improves sentiment analysis efficiency and greatly strengthens the model's feature extraction and interactive fusion capabilities. In the feature extraction layer, the method extracts aspect words, text features, and image features with a pre-trained encoder; the final aspect-word and sentence feature representations are obtained after bidirectional fusion of the aspect-word and sentence features in an attention mechanism layer. An image feature extraction network is established for the image features through a channel attention...
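The abstract names the building blocks (pre-trained encoders, bidirectional aspect-sentence attention fusion, channel attention over image features, a GCN) but not their concrete wiring. The sketch below is one plausible reading in PyTorch, not the patented implementation: the squeeze-and-excitation realization of channel attention, the region-graph GCN, and all module names and dimensions are illustrative assumptions.

```python
# Minimal sketch of the dual-channel architecture described in the abstract.
# All names, dimensions, and wiring are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention over image feature maps
    (one common way to realize the 'channel attention' the abstract names)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); global-average-pool, then reweight each channel
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w.unsqueeze(-1).unsqueeze(-1)


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (B, N, in_dim); adj: (B, N, N), assumed row-normalized
        return F.relu(torch.bmm(adj, self.linear(h)))


class DualChannelABSA(nn.Module):
    """Channel 1: bidirectional aspect<->sentence attention fusion.
    Channel 2: channel-attended image regions refined by a GCN.
    Pooled representations from both channels feed a sentiment classifier."""
    def __init__(self, d: int = 256, img_channels: int = 64, n_classes: int = 3):
        super().__init__()
        self.channel_att = ChannelAttention(img_channels)
        self.img_proj = nn.Linear(img_channels, d)
        self.a2s = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.s2a = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.gcn = GCNLayer(d, d)
        self.classifier = nn.Linear(3 * d, n_classes)

    def forward(self, aspect, sentence, image, adj):
        # aspect: (B, La, d), sentence: (B, Ls, d) -- pre-encoded token features
        # image: (B, C, H, W) feature maps from a pretrained CNN; adj: (B, N, N)
        a_fused, _ = self.a2s(aspect, sentence, sentence)  # aspect attends to sentence
        s_fused, _ = self.s2a(sentence, aspect, aspect)    # sentence attends to aspect
        img = self.channel_att(image)                      # reweight channels
        img = img.flatten(2).transpose(1, 2)               # (B, H*W, C) region nodes
        img = self.gcn(self.img_proj(img), adj)            # GCN over region graph
        pooled = torch.cat([a_fused.mean(1), s_fused.mean(1), img.mean(1)], dim=-1)
        return self.classifier(pooled)


# Smoke test with random tensors and a trivial self-loop region graph.
B, La, Ls, d, C, H, W = 2, 4, 20, 256, 64, 7, 7
model = DualChannelABSA(d=d, img_channels=C)
adj = torch.eye(H * W).unsqueeze(0).repeat(B, 1, 1)
logits = model(torch.randn(B, La, d), torch.randn(B, Ls, d),
               torch.randn(B, C, H, W), adj)
print(logits.shape)  # torch.Size([2, 3])
```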
format Patent
creationdate 2023-08-29
language chi ; eng
recordid cdi_epo_espacenet_CN116662924A
source esp@cenet
subjects CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; ELECTRIC DIGITAL DATA PROCESSING ; PHYSICS
title Aspect-level multi-modal sentiment analysis method based on double channels and attention mechanism
url https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230829&DB=EPODOC&CC=CN&NR=116662924A