Multi-modal feature fusion method and system for sentiment analysis

The invention provides a multi-modal feature fusion method and system for sentiment analysis. According to the scheme, the method comprises the steps of obtaining text data in social data and image data corresponding to the text data; performing feature extraction on the text data and the image data to obtain text features and image features; based on the text features and the image features, obtaining interaction information between the text features and the image features by adopting a cross-modal attention mechanism, and performing noise filtering on the interaction information through a gating mechanism to obtain text features after image filtering and image features after text filtering; splicing the obtained text features, the image features, the text features after image filtering and the image features after text filtering to obtain fusion features; and, based on the fusion features, obtaining an emotion analysis result through a pre-constructed emotion analysis model.
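The fusion pipeline the abstract walks through — cross-modal attention for interaction information, a gating mechanism for noise filtering, then concatenation ("splicing") of all four feature sets — can be sketched as follows. The patent does not disclose concrete layer shapes, so the scaled dot-product attention, the additive sigmoid gate, and the mean-pooling before concatenation are all illustrative assumptions, not the patented implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, key_value):
    """Scaled dot-product attention: tokens of one modality (`query`)
    attend over tokens of the other (`key_value`), yielding the
    interaction information between the two feature sequences."""
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)        # (Lq, Lkv)
    return softmax(scores, axis=-1) @ key_value      # (Lq, d)

def gated_filter(own_features, interaction):
    """Gating mechanism (assumed form): a sigmoid gate computed from
    both inputs down-weights noisy dimensions of the interaction."""
    gate = 1.0 / (1.0 + np.exp(-(own_features + interaction)))
    return gate * interaction

def fuse(text_seq, image_seq):
    """Concatenate raw text features, raw image features, text features
    after image filtering, and image features after text filtering.
    Mean-pooling each sequence to a vector is an assumption here."""
    text_after_image = gated_filter(text_seq, cross_modal_attention(text_seq, image_seq))
    image_after_text = gated_filter(image_seq, cross_modal_attention(image_seq, text_seq))
    return np.concatenate([
        text_seq.mean(axis=0),
        image_seq.mean(axis=0),
        text_after_image.mean(axis=0),
        image_after_text.mean(axis=0),
    ])  # shape: (4 * d,)
```

With `d`-dimensional features, the fused vector has length `4 * d`, matching the four spliced feature sets named in the abstract.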

Detailed description

Bibliographic details
Main authors: LIANG HU, DU WANTONG, GENG YUSHUI, ZHAO JING
Format: Patent
Language: Chinese; English
Online access: Order full text
creationdate 2023-08-25
link https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230825&DB=EPODOC&CC=CN&NR=116644385A
recordid cdi_epo_espacenet_CN116644385A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
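The abstract's final step obtains the sentiment result from a "pre-constructed emotion analysis model", whose architecture the record does not disclose. A minimal stand-in is a single linear layer with a softmax over sentiment classes; the `weights`, `bias`, and three-class setup below are assumptions for illustration only:

```python
import numpy as np

def predict_sentiment(fused, weights, bias):
    """Hypothetical classifier head over the fused feature vector:
    linear projection to class logits, softmax to probabilities,
    argmax to a sentiment label index."""
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())     # subtract max for numerical stability
    probs = exp / exp.sum()
    return int(np.argmax(probs)), probs
```

In practice this head would be trained jointly with the fusion layers; here it only shows where the fused vector feeds into classification.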