Cross-attention conformer for context modeling in speech enhancement for ASR
Format: | Article |
Language: | eng |
Summary: | This work introduces the \emph{cross-attention conformer}, an attention-based architecture for context modeling in speech enhancement. Given that the context information is often sequential and of a different length than the audio to be enhanced, we use cross-attention to summarize and merge contextual information with the input features. Building on the recently proposed conformer model, which uses self-attention layers as building blocks, the proposed cross-attention conformer can be used to build deep contextual models. As a concrete example, we show how noise context, i.e., a short noise-only audio segment preceding an utterance, can be used to build a speech enhancement feature frontend with cross-attention conformer layers that improves the noise robustness of automatic speech recognition. |
DOI: | 10.48550/arxiv.2111.00127 |
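The summary describes merging a variable-length context sequence (e.g., noise-only audio features) into the input features via cross-attention. The following is a minimal illustrative sketch of that idea in PyTorch, not the paper's actual architecture: the class name, dimensions, residual placement, and feed-forward layout are assumptions, and the full cross-attention conformer described in the paper contains additional components (e.g., convolution modules) omitted here.

```python
# Minimal sketch (assumed, not the authors' implementation) of a cross-attention
# block that summarizes a context sequence and merges it with input features.

import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """Queries come from the input features; keys/values from the context."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_context = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Conformer-style feed-forward applied after merging the context.
        self.ffn = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, 4 * dim),
            nn.SiLU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, inputs: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # inputs:  (batch, T_in, dim)  features of the audio to be enhanced
        # context: (batch, T_ctx, dim) e.g. noise-only segment features;
        #          T_ctx may differ from T_in, hence cross-attention.
        q = self.norm_inputs(inputs)
        kv = self.norm_context(context)
        summary, _ = self.cross_attn(q, kv, kv)  # context summarized per input frame
        merged = inputs + summary                # residual merge with input features
        return merged + self.ffn(merged)


if __name__ == "__main__":
    block = CrossAttentionBlock()
    noisy = torch.randn(2, 100, 256)      # 100 frames of noisy speech features
    noise_ctx = torch.randn(2, 30, 256)   # 30 frames of noise-only context
    print(block(noisy, noise_ctx).shape)  # torch.Size([2, 100, 256])
```

Stacking several such blocks, with the context re-attended at each layer, would give a deep contextual frontend in the spirit of the summary above.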