Self-learning emotion interaction method based on multi-modal recognition
Main Authors:
Format: Patent
Language: Chinese; English
Online Access: Order full text
Summary: The invention discloses a self-learning emotion interaction method based on multi-modal recognition. The method comprises the following steps: collecting voice, face and gesture signals respectively through a non-contact channel; performing feature extraction on the signals to obtain their preliminary features; inputting the features into a bidirectional LSTM layer to obtain single-mode private information and multi-mode interaction information, and deriving fusion features from this information; predicting the user's emotion with a classification learning algorithm that combines the multi-modal fusion features with a historical emotional state curve, and selecting an interaction mode accordingly; in the selected interaction mode, giving an interaction response according to the dialogue memory network; and finally, feeding back and optimizing the emotional state curve and the dialogue memory network according to the interaction effect. According to the method, an operator is allowed to input information thro…
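The abstract compresses the fusion and classification steps into a single sentence and the patent text here does not publish its architecture. As a rough illustration only, the PyTorch sketch below shows one plausible shape of those steps: one bidirectional LSTM per modality producing the "single-mode private information", a shared projection standing in for the "multi-mode interaction information", and a classifier head that sees the fused features together with the historical emotional state curve. All names, feature dimensions, and the pooling/fusion choices are hypothetical.

```python
# Hypothetical sketch of the fusion + classification stages described in
# the abstract; not the patented architecture, whose details are unpublished.
import torch
import torch.nn as nn

class ModalEncoder(nn.Module):
    """Bidirectional LSTM over one modality's feature sequence."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, x):                       # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)                   # (batch, time, 2*hidden)
        return out.mean(dim=1)                  # pooled "private" representation

class EmotionClassifier(nn.Module):
    def __init__(self, dims=(40, 128, 21), hidden=64, history_dim=8,
                 n_emotions=7):
        super().__init__()
        # one encoder per modality: voice, face, gesture (dims are assumed)
        self.encoders = nn.ModuleList(ModalEncoder(d, hidden) for d in dims)
        private = 2 * hidden * len(dims)
        self.interact = nn.Linear(private, hidden)   # cross-modal mixing
        # the head combines fusion features with the historical state curve
        self.head = nn.Linear(private + hidden + history_dim, n_emotions)

    def forward(self, voice, face, gesture, history):
        private = torch.cat([enc(x) for enc, x in
                             zip(self.encoders, (voice, face, gesture))], -1)
        fusion = torch.cat([private, torch.tanh(self.interact(private))], -1)
        return self.head(torch.cat([fusion, history], -1))

# toy usage: batch of 2 sequences, 50 frames per modality
model = EmotionClassifier()
logits = model(torch.randn(2, 50, 40), torch.randn(2, 50, 128),
               torch.randn(2, 50, 21), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 7])
```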
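The final step, feeding the interaction effect back into the emotional state curve, is likewise left unspecified. Under the minimal assumption that the curve is a running vector of per-emotion scores, one simple reading is an exponential moving average whose update rate is scaled by how well the interaction went; both the EMA form and the effect-scaled rate below are assumptions, not the patent's method.

```python
# Hypothetical feedback step: blend the newly observed emotion distribution
# into the running state curve, trusting the observation more when the
# interaction effect was rated higher.
import numpy as np

def update_state_curve(curve: np.ndarray, observed: np.ndarray,
                       interaction_effect: float, base_rate: float = 0.2):
    """curve, observed: per-emotion probability vectors in [0, 1]."""
    rate = base_rate * interaction_effect    # poor interactions update less
    return (1.0 - rate) * curve + rate * observed

curve = np.full(7, 1 / 7)                    # uninformative prior over 7 emotions
observed = np.array([0.0, 0.1, 0.7, 0.1, 0.1, 0.0, 0.0])
print(update_state_curve(curve, observed, interaction_effect=0.9))
```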