Emotion quantification method and device based on multi-modal features, equipment and storage medium

Bibliographic details
Inventors: WANG JING, ZHAO JINGHE, LIU PENGBO, WANG GANG, HE ZHIYANG, HU JIAXUE, ZHAO ZHIWEI, LI NANQIAN, FENG LEI, LU XIAOLIANG
Format: Patent
Language: Chinese; English
Description
Abstract: The invention discloses a multi-modal feature-based emotion quantification method, device, equipment and storage medium. Audio data, video data and recognition text of a to-be-tested object in a set conversation scene are obtained; the data of these three modalities comprehensively cover the overall state of the to-be-tested object, provide richer information, improve the accuracy of emotion quantification, and provide a good data basis for accurate analysis to obtain emotion reference data. The method comprises the following steps: extracting features from the data of the three modalities respectively to obtain text features, audio local features and video local features; performing dimension compression and clustering on the audio local features and the video local features respectively by means of a learnable clustering module to obtain more valuable high-dimensional audio global features and video global features; and fusing the text feature, the audio global feature and the video global feature to obtain the emotion reference data of the to-be-tested object.
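The abstract does not disclose the concrete structure of the learnable clustering module or of the fusion step. The sketch below, written in PyTorch, shows one plausible reading: a NetVLAD-style soft-assignment layer stands in for the learnable clustering module that compresses variable-length audio and video local features into fixed-size global features, and concatenation followed by a small classifier stands in for the fusion of the text, audio global and video global features. All identifiers (LearnableClustering, MultimodalEmotionQuantifier), dimensions, layer sizes and the five-level output are illustrative assumptions, not the patented implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableClustering(nn.Module):
    # NetVLAD-style learnable clustering: compresses a variable-length sequence
    # of local features into a fixed-size, dimension-compressed global feature.
    def __init__(self, feature_dim, num_clusters, out_dim):
        super().__init__()
        self.assign = nn.Linear(feature_dim, num_clusters)              # soft cluster assignment
        self.centroids = nn.Parameter(torch.randn(num_clusters, feature_dim))
        self.compress = nn.Linear(num_clusters * feature_dim, out_dim)  # dimension compression

    def forward(self, local_feats):                                     # (B, T, D)
        soft_assign = F.softmax(self.assign(local_feats), dim=-1)       # (B, T, K)
        residuals = local_feats.unsqueeze(2) - self.centroids           # (B, T, K, D)
        vlad = (soft_assign.unsqueeze(-1) * residuals).sum(dim=1)       # (B, K, D)
        vlad = F.normalize(vlad, dim=-1).flatten(1)                     # (B, K*D)
        return self.compress(vlad)                                      # (B, out_dim) global feature

class MultimodalEmotionQuantifier(nn.Module):
    # Fuses the text feature with the clustered audio/video global features
    # and maps the result to emotion reference scores (assumed 5 levels).
    def __init__(self, text_dim=768, audio_dim=128, video_dim=512,
                 num_clusters=16, global_dim=256, num_levels=5):
        super().__init__()
        self.audio_cluster = LearnableClustering(audio_dim, num_clusters, global_dim)
        self.video_cluster = LearnableClustering(video_dim, num_clusters, global_dim)
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + 2 * global_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_levels),
        )

    def forward(self, text_feat, audio_local, video_local):
        audio_global = self.audio_cluster(audio_local)   # (B, global_dim)
        video_global = self.video_cluster(video_local)   # (B, global_dim)
        fused = torch.cat([text_feat, audio_global, video_global], dim=-1)
        return self.fusion(fused)                        # (B, num_levels)

# Random tensors stand in for features extracted from the three modalities.
model = MultimodalEmotionQuantifier()
scores = model(torch.randn(2, 768),        # text features
               torch.randn(2, 200, 128),   # audio local features (200 frames)
               torch.randn(2, 60, 512))    # video local features (60 frames)
print(scores.shape)                        # torch.Size([2, 5])

Soft assignment keeps the clustering step fully differentiable, so the cluster centroids can be trained jointly with the rest of the network; the module actually claimed in the patent may differ in structure and in how the compressed global features are formed.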