Self-supervised Learning with Random-projection Quantizer for Speech Recognition
Format: Article
Language: English
Abstract: We present a simple and effective self-supervised learning approach for speech recognition. The approach learns a model to predict the masked speech signals, in the form of discrete labels generated with a random-projection quantizer. In particular, the quantizer projects speech inputs with a randomly initialized matrix and performs a nearest-neighbor lookup in a randomly initialized codebook. Neither the matrix nor the codebook is updated during self-supervised learning. Since the random-projection quantizer is not trained and is separate from the speech recognition model, the design makes the approach flexible and compatible with universal speech recognition architectures. On LibriSpeech our approach achieves word-error-rates similar to previous self-supervised learning work with non-streaming models, and provides lower word-error-rates and latency than wav2vec 2.0 and w2v-BERT with streaming models. On multilingual tasks the approach also provides significant improvements over wav2vec 2.0 and w2v-BERT.
DOI: 10.48550/arxiv.2202.01855
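
To make the quantization step concrete, below is a minimal NumPy sketch of the idea from the abstract: speech frames are projected with a fixed random matrix, and each frame is labeled by its nearest entry in a fixed random codebook. The dimensions, random seed, and unit-norm codebook here are illustrative assumptions rather than settings taken from the paper; during self-supervised training, labels like these would serve as prediction targets for the masked frames.

```python
import numpy as np

# Minimal sketch of a random-projection quantizer. Dimensions, seed, and the
# unit-norm codebook are illustrative assumptions, not settings from the paper.
rng = np.random.default_rng(0)
d, h, V = 80, 16, 8192  # input feature dim, projection dim, codebook size

# Randomly initialized projection matrix and codebook; neither is ever trained.
projection = rng.normal(size=(d, h))
codebook = rng.normal(size=(V, h))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-norm codes

def quantize(frames: np.ndarray) -> np.ndarray:
    """Map speech frames (T, d) to discrete labels (T,): project each frame,
    then take the nearest codebook entry as its label."""
    projected = frames @ projection  # (T, h)
    # With unit-norm codes, the nearest neighbor under Euclidean distance is
    # the code with the largest dot product: ||x - c||^2 = ||x||^2 - 2 x.c + 1.
    return np.argmax(projected @ codebook.T, axis=1)  # (T,) label ids

# Example: label 100 frames of random 80-dim features (stand-ins for log-mel).
labels = quantize(rng.normal(size=(100, d)))
print(labels[:10])
```

Because neither the projection matrix nor the codebook receives gradient updates, the quantizer can be computed independently of, and kept entirely outside, the speech recognition model, which is what makes the design compatible with different architectures.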