Dimensional sentiment analysis method based on joint cross attention mechanism

Bibliographic details
Authors: LIU FENG, WU SHUHUA, LIU CHANGXUAN, ZHAO ZHENGLAI
Format: Patent
Language: Chinese; English
Abstract: The invention discloses a dimensional sentiment analysis method based on a joint cross attention mechanism. The method comprises the following steps: obtaining an original video and preprocessing it to obtain face image data and audio data; performing feature extraction on the face image data with a Resnet50 model and a time-sequence deep convolutional neural network to obtain a visual feature matrix; performing feature extraction on the audio data with a VGGish model and a time-sequence deep convolutional neural network to obtain an auditory feature matrix; and feeding the visual feature matrix and the auditory feature matrix into a joint cross attention feature fusion module and a fully connected layer to obtain the analysis result. By introducing the joint cross attention mechanism and the time-sequence deep convolutional neural network, the method improves the feature extraction and multi-modal feature fusion mode, and the processing o…
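
The abstract outlines the architecture but gives no implementation details. The PyTorch sketch below shows one plausible reading of the pipeline: frame-level Resnet50 and VGGish features are refined by small temporal convolution blocks, fused by a joint cross attention block in which each modality attends over the concatenated audio-visual sequence, and regressed to two continuous outputs (e.g. valence and arousal). All dimensions, layer counts, the exact attention formulation, and the names TemporalConvBlock, JointCrossAttentionFusion and DimensionalSentimentModel are assumptions made for illustration, not the patented implementation.

# Hypothetical sketch of the pipeline described in the abstract (PyTorch).
# Assumptions not stated in the abstract: feature dimensions, TCN depth,
# the exact joint cross attention formulation, and a 2-D (valence/arousal) head.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """1-D convolutions over the time axis, a stand-in for the
    'time-sequence deep convolutional neural network' in the abstract."""
    def __init__(self, dim: int, kernel_size: int = 3, num_layers: int = 2):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [
                nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm1d(dim),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, time, dim)
        x = x.transpose(1, 2)        # Conv1d expects (batch, dim, time)
        return self.net(x).transpose(1, 2)

class JointCrossAttentionFusion(nn.Module):
    """Each modality attends over the concatenated (joint) audio-visual sequence."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.v_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.a_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual, audio):                    # both: (batch, time, dim)
        joint = torch.cat([visual, audio], dim=1)        # joint representation
        v_fused, _ = self.v_attn(visual, joint, joint)   # visual queries joint keys/values
        a_fused, _ = self.a_attn(audio, joint, joint)    # audio queries joint keys/values
        return torch.cat([v_fused, a_fused], dim=-1)     # (batch, time, 2*dim)

class DimensionalSentimentModel(nn.Module):
    def __init__(self, visual_dim=2048, audio_dim=128, hidden_dim=256, out_dim=2):
        super().__init__()
        # Project frame-level Resnet50 (2048-d) and VGGish (128-d) features to a
        # shared hidden size; the backbone networks themselves are omitted here.
        self.v_proj = nn.Linear(visual_dim, hidden_dim)
        self.a_proj = nn.Linear(audio_dim, hidden_dim)
        self.v_tcn = TemporalConvBlock(hidden_dim)
        self.a_tcn = TemporalConvBlock(hidden_dim)
        self.fusion = JointCrossAttentionFusion(hidden_dim)
        self.head = nn.Linear(2 * hidden_dim, out_dim)   # e.g. valence/arousal

    def forward(self, visual_feats, audio_feats):
        v = self.v_tcn(self.v_proj(visual_feats))
        a = self.a_tcn(self.a_proj(audio_feats))
        fused = self.fusion(v, a)
        return self.head(fused.mean(dim=1))              # pool over time, then regress

# Usage with dummy frame-level features (batch of 2, 8 time steps):
model = DimensionalSentimentModel()
visual = torch.randn(2, 8, 2048)   # e.g. pooled Resnet50 features per face frame
audio = torch.randn(2, 8, 128)     # e.g. VGGish embeddings per audio segment
print(model(visual, audio).shape)  # torch.Size([2, 2])

Pooling over time before the regression head is one of several possible design choices; the patent may instead predict a value per time step, which this sketch does not attempt to reproduce.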