Multi-Source Discriminant Subspace Alignment for Cross-Domain Speech Emotion Recognition


Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023-01, Vol. 31, pp. 1-13
Authors: Li, Shaokai; Song, Peng; Zheng, Wenming
Format: Article
Language: English
Abstract: Cross-domain speech emotion recognition (SER) is an effective strategy for improving the generalization ability of emotion classification models and is an important research direction in speech signal processing. However, because speech signals are non-stationary, it is difficult to train a robust classifier from a single-source emotional corpus. To address this shortcoming, we propose a novel method named multi-source discriminant subspace alignment (MDSA) for cross-domain SER. In MDSA, we first conduct linear discriminant analysis (LDA) in the multi-source domains. Then, the instances in the multi-source discriminant subspaces are used to linearly reconstruct the instances in the target subspace, with the reconstruction contribution of each source discriminant subspace determined by adaptive weights. Furthermore, the multi-source discriminant subspaces are aligned by reducing the loss between projections, which makes the model more robust. In this way, MDSA considers both the alignment of cross-domain data distributions and the structural information of cross-domain instances. Finally, extensive experiments are conducted on five standard emotional corpora, i.e., Berlin, IEMOCAP, CVE, EMOVO, and TESS, and the results demonstrate that the proposed MDSA outperforms several state-of-the-art transfer learning algorithms. The code is available at https://github.com/shaokai1209/MDSA.
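The pipeline the abstract describes (per-source LDA, linear reconstruction of target instances with adaptive weights, and projection-level alignment) can be illustrated with standard building blocks. The sketch below is a loose approximation under our own assumptions, not the released MDSA code: it substitutes a PCA basis for the target subspace, the classic closed-form subspace alignment M = Ps^T Pt, inverse reconstruction-error weights, and a weighted vote over per-source 1-NN classifiers, whereas the paper optimizes these terms jointly.

```python
# Illustrative sketch only -- NOT the authors' implementation
# (that is at https://github.com/shaokai1209/MDSA). Assumed stand-ins:
# per-source LDA bases, a PCA target basis, closed-form subspace
# alignment, inverse reconstruction-error weights, 1-NN classifiers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier


def source_basis(Xs, ys, dim):
    # Discriminant directions of one labeled source corpus via LDA;
    # QR-orthonormalization turns them into a proper subspace basis.
    # (dim must be smaller than the number of emotion classes.)
    lda = LinearDiscriminantAnalysis(n_components=dim).fit(Xs, ys)
    Ps, _ = np.linalg.qr(lda.scalings_[:, :dim])
    return Ps                                     # (n_features, dim)


def target_basis(Xt, dim):
    # Unsupervised PCA basis for the unlabeled target corpus.
    return PCA(n_components=dim).fit(Xt).components_.T


def aligned_basis(Ps, Pt):
    # Subspace alignment: min_M ||Ps M - Pt||_F has the closed form
    # M = Ps^T Pt, so the aligned source basis is Ps Ps^T Pt.
    return Ps @ (Ps.T @ Pt)


def adaptive_weight(Zs, Zt):
    # Score a source by how well its embedded instances linearly
    # reconstruct the target embeddings (solve C Zs ~= Zt); the inverse
    # residual is a simple stand-in for MDSA's adaptive weights.
    Ct, *_ = np.linalg.lstsq(Zs.T, Zt.T, rcond=None)
    return 1.0 / (np.linalg.norm(Ct.T @ Zs - Zt) + 1e-8)


def mdsa_style_predict(sources, Xt, dim=3):
    """sources: list of (Xs, ys) labeled corpora; Xt: target features."""
    Xt = Xt - Xt.mean(axis=0)                     # center each domain
    Pt = target_basis(Xt, dim)
    Zt = Xt @ Pt
    votes, weights, labels = [], [], []
    for Xs, ys in sources:
        Xs = Xs - Xs.mean(axis=0)
        Zs = Xs @ aligned_basis(source_basis(Xs, ys, dim), Pt)
        votes.append(KNeighborsClassifier(1).fit(Zs, ys).predict(Zt))
        weights.append(adaptive_weight(Zs, Zt))
        labels.append(ys)
    votes = np.array(votes)                       # (n_sources, n_target)
    weights = np.array(weights) / np.sum(weights)
    classes = np.unique(np.concatenate(labels))
    scores = np.stack([weights @ (votes == c) for c in classes])
    return classes[np.argmax(scores, axis=0)]     # weighted source vote
```

Combining per-source predictions through normalized weights mirrors the abstract's idea that sources which reconstruct the target subspace well should contribute more; the joint optimization in the paper replaces this two-stage heuristic.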
ISSN: 2329-9290; eISSN: 2329-9304
DOI: 10.1109/TASLP.2023.3288415