Leveraging Joint Spectral and Spatial Learning with MAMBA for Multichannel Speech Enhancement
Main authors: , , , , , , , , ,
Format: Article
Language: eng
Abstract: In multichannel speech enhancement, effectively capturing spatial and spectral information across different microphones is crucial for noise reduction. Traditional methods, such as CNNs and LSTMs, attempt to model the temporal dynamics of full-band and sub-band spectral and spatial features. However, these approaches face limitations in fully modeling complex temporal dependencies, especially in dynamic acoustic environments. To overcome these challenges, we modify the advanced McNet model by introducing an improved version of Mamba, a state-space model, and further propose MCMamba. MCMamba has been completely reengineered to integrate full-band and narrow-band spatial information with sub-band and full-band spectral features, providing a more comprehensive approach to modeling spatial and spectral information. Our experimental results demonstrate that MCMamba significantly improves the modeling of spatial and spectral features in multichannel speech enhancement, outperforming McNet and achieving state-of-the-art performance on the CHiME-3 dataset. Additionally, we find that Mamba performs exceptionally well in modeling spectral information.
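The abstract rests on two ideas: the Mamba selective state-space recurrence, and arranging the multichannel spectrogram into sub-band sequences (one frequency bin's trajectory over time) and full-band sequences (one frame's values across frequency). For orientation only, the following is a minimal NumPy sketch of a simplified selective scan applied to such sequences. All names and shapes are illustrative assumptions, and the Euler-style discretization of B is a simplification of Mamba's exact zero-order-hold formula; this is not the authors' MCMamba implementation.

```python
import numpy as np

def selective_scan(x, A, B, C, delta):
    """Sequential form of a (diagonal) selective state-space recurrence,
    which Mamba computes with a hardware-friendly parallel scan:
        h_t = exp(delta_t * A) * h_{t-1} + delta_t * B_t * x_t
        y_t = C_t . h_t
    Shapes: x (T,), A (N,), B and C (T, N), delta (T,)."""
    T, N = B.shape
    h = np.zeros(N)
    y = np.empty(T)
    for t in range(T):
        h = np.exp(delta[t] * A) * h + delta[t] * B[t] * x[t]  # state update
        y[t] = C[t] @ h                                        # readout
    return y

# Hypothetical sub-band vs. full-band sequences from a multichannel STFT
# magnitude tensor of shape (mics M, frames T, frequency bins F):
# sub-band modules scan over time within one bin; full-band modules scan
# over frequency within one frame.
rng = np.random.default_rng(0)
M, T, F, N = 4, 50, 129, 16
X = np.abs(rng.standard_normal((M, T, F)) + 1j * rng.standard_normal((M, T, F)))
subband_seq = X[0, :, 10]   # bin 10 of mic 0 over time, length T
fullband_seq = X[0, 25, :]  # frame 25 of mic 0 over frequency, length F

A = -np.exp(rng.standard_normal(N))  # negative diagonal A keeps the scan stable
for seq in (subband_seq, fullband_seq):
    L = len(seq)
    B = rng.standard_normal((L, N))
    C = rng.standard_normal((L, N))
    delta = 0.05 * np.exp(0.1 * rng.standard_normal(L))  # positive step sizes
    print(selective_scan(seq, A, B, C, delta).shape)      # (50,) then (129,)
```

Per the abstract, MCMamba swaps this kind of state-space scan into McNet's full-band/sub-band framework in place of its recurrent temporal modeling; the exact module wiring and the "improved version of Mamba" are specified in the paper itself.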
DOI: 10.48550/arxiv.2409.10376