Multi-feature fusion visual dialogue sentiment analysis method of hybrid model architecture

The invention relates to the technical field of natural language processing and provides a multi-feature fusion visual dialogue sentiment analysis method based on a hybrid model architecture. The method comprises the following steps: acquiring dialogue data containing text information and video information...
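The preprocessing step described in the abstract (truncating each dialogue statement to a fixed length and collecting a face image sequence from the video track) can be illustrated with a short Python sketch. The OpenCV Haar cascade detector, the frame step, and the token limit below are illustrative assumptions, not details taken from the patent.

# Illustrative sketch of the preprocessing described in the abstract:
# truncate dialogue text to a statement length and collect a face image
# sequence from the video. Detector choice and parameters are assumptions.
import cv2


def truncate_statement(text: str, max_len: int = 64) -> str:
    """Keep at most max_len tokens of one dialogue statement."""
    tokens = text.split()
    return " ".join(tokens[:max_len])


def collect_face_sequence(video_path: str, frame_step: int = 10, size=(224, 224)):
    """Sample frames from a video and crop the first detected face in each."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(video_path)
    faces, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(boxes) > 0:
                x, y, w, h = boxes[0]
                faces.append(cv2.resize(frame[y:y + h, x:x + w], size))
        index += 1
    capture.release()
    return faces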

Detailed Description

Bibliographic Details
Main authors: WANG SHUAI, TANG WENZHONG, TANG HONGMEI, WANG YANYANG, ZHU DIXIONGXIAO
Format: Patent
Language: Chinese; English
Subjects:
Online access: Order full text
creator WANG SHUAI
TANG WENZHONG
TANG HONGMEI
WANG YANYANG
ZHU DIXIONGXIAO
description The invention relates to the technical field of natural language processing and provides a multi-feature fusion visual dialogue sentiment analysis method based on a hybrid model architecture. The method comprises the following steps: acquiring dialogue data containing text information and video information; truncating the text information to a statement length; collecting a face image sequence from the video information and performing sentiment analysis on it, thereby obtaining preprocessed text data and image data; extracting text features from the preprocessed text data based on paired and grouped texts; extracting image features from the preprocessed image data based on the face image sequence; fusing the text features and the image features and carrying out emotion classification to obtain emotion categories; and training the sentiment analysis model to obtain a trained sentiment analysis model. According to the method, the effectiveness of conversation e…
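For the fusion and classification steps in the description above, the following sketch shows one common way to combine a text feature vector with an image feature vector and train an emotion classifier in PyTorch. Plain concatenation, the feature dimensions, and the seven emotion classes are assumptions made for illustration; the abstract does not state how the hybrid architecture actually fuses the two modalities.

# Minimal PyTorch sketch of the fusion and emotion-classification steps.
# Dimensions, class count, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class FusionSentimentHead(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 512,
                 hidden_dim: int = 256, num_classes: int = 7):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # Simple early fusion: concatenate the two modality vectors.
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)


# Example usage with a batch of 4 dialogue turns.
head = FusionSentimentHead()
logits = head(torch.randn(4, 768), torch.randn(4, 512))
labels = torch.randint(0, 7, (4,))
loss = nn.CrossEntropyLoss()(logits, labels)  # training objective for emotion classification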
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN118228156A
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title Multi-feature fusion visual dialogue sentiment analysis method of hybrid model architecture
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T15%3A59%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=WANG%20SHUAI&rft.date=2024-06-21&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118228156A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true