Investigating data partitioning strategies for crosslinguistic low-resource ASR evaluation
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Many automatic speech recognition (ASR) data sets include a single
pre-defined test set consisting of one or more speakers whose speech never
appears in the training set. This "hold-speaker(s)-out" data partitioning
strategy, however, may not be ideal for data sets in which the number of
speakers is very small. This study investigates ten different data split
methods for five languages with minimal ASR training resources. We find that
(1) model performance varies greatly depending on which speaker is selected for
testing; (2) the average word error rate (WER) across all held-out speakers is
comparable not only to the average WER over multiple random splits but also to
any given individual random split; (3) WER is also generally comparable when
the data is split heuristically or adversarially; (4) utterance duration and
intensity are comparatively more predictive factors of variability regardless
of the data split. These results suggest that the widely used hold-speakers-out
approach to ASR data partitioning can yield results that do not reflect model
performance on unseen data or speakers. Random splits can yield more reliable
and generalizable estimates when facing data sparsity.
DOI: 10.48550/arxiv.2208.12888
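As a rough illustration of the two partitioning strategies contrasted in the abstract, the sketch below builds a hold-one-speaker-out split and a speaker-agnostic random split over the same toy utterance table, using scikit-learn's LeaveOneGroupOut and train_test_split. The column names (`speaker`, `utterance_id`) and the toy data are assumptions made for illustration only; this is not the paper's actual data or evaluation code.

```python
# Minimal sketch of the two data partitioning strategies discussed above.
# Column names and toy data are illustrative assumptions, not taken from
# the paper's released code or corpora.
import pandas as pd
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

# Toy utterance table: one row per utterance, three speakers.
utterances = pd.DataFrame({
    "utterance_id": range(12),
    "speaker": ["spk1"] * 4 + ["spk2"] * 4 + ["spk3"] * 4,
})

# 1) Hold-speaker(s)-out: every utterance of the held-out speaker is
#    placed in the test set, so test speakers are unseen during training.
logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(utterances, groups=utterances["speaker"]):
    held_out = utterances.iloc[test_idx]["speaker"].unique().tolist()
    print("held-out speaker(s):", held_out, "| test utterances:", len(test_idx))

# 2) Random split: utterances are shuffled regardless of speaker, so the
#    same speaker can appear in both the train and test partitions.
train_df, test_df = train_test_split(utterances, test_size=0.25, random_state=0)
print("random-split test speakers:", sorted(test_df["speaker"].unique()))
```

Under a hold-speaker-out scheme the reported WER depends heavily on which speaker happens to be held out, which is the variability the abstract highlights; averaging over all held-out speakers, or over several random splits as in the second strategy, is what the study compares.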