Acoustic source classification using hyperset of fused voice biometric and spatial features



Bibliographic Details
Main authors: Samuel P. Ebenezer, Ghassan Maalouli
Format: Patent
Language: English
Description
Summary: A method includes extracting, from multi-microphone input, a hyperset of features of acoustic sources, using the extracted features to identify separable clusters associated with acoustic scenarios, and classifying subsequent input as one of the acoustic scenarios using the hyperset of features. The acoustic scenarios include a desired spatially moving/non-moving talker and an undesired spatially moving/non-moving acoustic source. The hyperset of features includes both spatial and voice biometric features. The classified acoustic scenario may be used in a robotics application or voice assistant device for desired speech enhancement or interference signal cancellation. Specifically, the classification of the acoustic scenarios can be used to adapt a beamformer, e.g., by adjusting its step size. The hyperset of features may also include visual biometric features extracted from one or more cameras viewing the acoustic sources. The spatial and biometric features may alternatively be separately extracted, clustered, and classified, with their separate classifications fused, e.g., using frame synchronization.
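
As an illustration of the pipeline the abstract describes, the sketch below (Python with NumPy and scikit-learn) extracts a hyperset of spatial and voice biometric features from multi-microphone frames, identifies scenario clusters, classifies new frames, and maps the classified scenario to a beamformer step size. The abstract does not disclose concrete feature extractors, cluster counts, or step-size values, so every placeholder feature, function name, and number below is an assumption for illustration only.

# Illustrative sketch only: placeholder features, cluster count, and
# step-size mapping are assumptions, not the patented method.
import numpy as np
from sklearn.cluster import KMeans

def extract_spatial_features(mic_frames):
    # mic_frames: (num_frames, num_mics, frame_len) multi-microphone input.
    # Placeholder spatial cue: normalized per-microphone frame energies
    # (a real system might use direction-of-arrival or inter-mic delays).
    energy = np.sum(mic_frames ** 2, axis=2)
    return energy / (np.sum(energy, axis=1, keepdims=True) + 1e-12)

def extract_voice_biometric_features(mic_frames):
    # Placeholder voice biometric cue: spectral centroid and flatness of a
    # reference microphone (a real system might use speaker embeddings).
    ref = mic_frames[:, 0, :]
    spectrum = np.abs(np.fft.rfft(ref, axis=1))
    bins = np.arange(spectrum.shape[1])
    centroid = (spectrum * bins).sum(axis=1) / (spectrum.sum(axis=1) + 1e-12)
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12), axis=1)) / (spectrum.mean(axis=1) + 1e-12)
    return np.stack([centroid, flatness], axis=1)

def build_hyperset(mic_frames):
    # "Hyperset": per-frame concatenation of spatial and voice biometric features.
    return np.hstack([extract_spatial_features(mic_frames),
                      extract_voice_biometric_features(mic_frames)])

def fit_scenario_clusters(training_frames, num_scenarios=4):
    # Identify separable clusters associated with acoustic scenarios
    # (e.g. desired moving/non-moving talker, undesired moving/non-moving
    # source); the cluster count of 4 is an assumption.
    return KMeans(n_clusters=num_scenarios, n_init=10).fit(build_hyperset(training_frames))

def classify_frames(model, new_frames):
    # Classify subsequent input as one of the learned acoustic scenarios.
    return model.predict(build_hyperset(new_frames))

DESIRED_TALKER_CLUSTERS = {0, 1}   # assumed labels of desired-talker scenarios

def beamformer_step_size(scenario_label, base_mu=0.05):
    # Illustrative step-size adjustment: freeze adaptation while the desired
    # talker is active, adapt at base_mu when only undesired sources are present.
    return 0.0 if scenario_label in DESIRED_TALKER_CLUSTERS else base_mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.standard_normal((200, 4, 256))   # 200 frames, 4 mics, 256 samples
    test = rng.standard_normal((10, 4, 256))
    model = fit_scenario_clusters(train)
    for label in classify_frames(model, test):
        print(int(label), beamformer_step_size(label))

The feature-level concatenation above is only one of the options the abstract mentions; it also describes a decision-level alternative in which the spatial and biometric features are extracted, clustered, and classified separately and the frame-synchronized classifications are fused afterward.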