Applying wav2vec2 for Speech Recognition on Bengali Common Voices Dataset
Format: Article
Language: English
Abstract: Speech is inherently continuous, where discrete words, phonemes and other
units are not clearly segmented, and so speech recognition has been an active
research problem for decades. In this work we fine-tuned wav2vec 2.0 to recognize
and transcribe Bengali speech, training it on the Bengali Common Voice Speech
Dataset. After training for 71 epochs on a training set of 36,919 mp3 files, we
achieved a training loss of 0.3172 and a WER of 0.2524 on a validation set of
7,747 files. Using a 5-gram language model, the Levenshtein Distance was 2.6446 on
a test set of 7,747 files. The training and validation sets were then combined,
shuffled and split in an 85:15 ratio; training for 7 more epochs on this combined
dataset improved the Levenshtein Distance to 2.60753 on the test set. Our model was
the best-performing one, achieving a Levenshtein Distance of 6.234 on a hidden
dataset, 1.1049 units lower than the other competing submissions.
DOI: 10.48550/arxiv.2209.06581
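
The abstract above describes two concrete pieces of machinery: CTC fine-tuning of a pretrained wav2vec 2.0 checkpoint on Bengali Common Voice audio, and evaluation by WER and Levenshtein Distance. The two sketches below illustrate those pieces. First, a minimal fine-tuning setup, assuming the Hugging Face transformers library; the base checkpoint "facebook/wav2vec2-xls-r-300m", the character-level vocabulary file "vocab.json" (assumed to list the Bengali graphemes plus [UNK], [PAD] and the "|" word delimiter), the dummy audio and the example transcript are all illustrative assumptions, not the configuration reported in the paper.

    # Minimal sketch of setting up wav2vec 2.0 for CTC fine-tuning on Bengali text.
    # All file names, checkpoint names and settings are illustrative assumptions.
    import torch
    from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                              Wav2Vec2Processor, Wav2Vec2ForCTC)

    # Character-level tokenizer built from an assumed Bengali vocabulary file.
    tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                     pad_token="[PAD]", word_delimiter_token="|")
    feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                                 padding_value=0.0,
                                                 return_attention_mask=True)
    processor = Wav2Vec2Processor(feature_extractor=feature_extractor,
                                  tokenizer=tokenizer)

    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/wav2vec2-xls-r-300m",        # assumed multilingual base checkpoint
        ctc_loss_reduction="mean",
        pad_token_id=tokenizer.pad_token_id,
        vocab_size=len(tokenizer),             # size the CTC head to the Bengali vocab
    )
    model.freeze_feature_encoder()             # keep the convolutional encoder frozen

    # One illustrative CTC training step on dummy audio; real training would loop
    # over the Common Voice mp3 clips, resampled to 16 kHz, with their transcripts.
    audio = torch.randn(16000)                 # 1 second of fake audio
    inputs = processor(audio.numpy(), sampling_rate=16000, return_tensors="pt")
    labels = tokenizer("আমি ভাত খাই", return_tensors="pt").input_ids
    loss = model(input_values=inputs.input_values, labels=labels).loss
    loss.backward()                            # gradients for an optimizer step

Second, the evaluation metrics. WER is the word-level edit distance normalized by the length of the reference transcript, and the Levenshtein Distance is the unnormalized count of insertions, deletions and substitutions between two sequences; the paper's Levenshtein figures are additionally obtained after decoding with a 5-gram language model, which is not reproduced here, and the abstract does not say whether they are character- or word-level. The sketch below implements a generic Levenshtein distance and a WER built on top of it; the Bengali example strings are hypothetical.

    def levenshtein(ref, hyp):
        """Minimum number of insertions, deletions and substitutions needed
        to turn the sequence `ref` into the sequence `hyp`."""
        prev = list(range(len(hyp) + 1))   # distances for an empty prefix of ref
        for i, r in enumerate(ref, start=1):
            curr = [i]
            for j, h in enumerate(hyp, start=1):
                curr.append(min(prev[j] + 1,              # delete r
                                curr[j - 1] + 1,          # insert h
                                prev[j - 1] + (r != h)))  # substitute r -> h
            prev = curr
        return prev[-1]

    def wer(reference, hypothesis):
        """Word error rate: word-level edit distance divided by the number
        of words in the reference transcript."""
        ref_words = reference.split()
        return levenshtein(ref_words, hypothesis.split()) / max(len(ref_words), 1)

    # Hypothetical Bengali reference/hypothesis pair for illustration.
    ref = "আমি ভাত খাই"
    hyp = "আমি ভাত খায়"
    print("character-level Levenshtein distance:", levenshtein(ref, hyp))
    print("WER:", wer(ref, hyp))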