Twists, Humps, and Pebbles: Multilingual Speech Recognition Models Exhibit Gender Performance Gaps
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Current automatic speech recognition (ASR) models are designed to be used
across many languages and tasks without substantial changes. However, this
broad language coverage hides performance gaps within languages, for example,
across genders. Our study systematically evaluates the performance of two
widely used multilingual ASR models on three datasets, encompassing 19
languages from eight language families and two speaking conditions. Our
findings reveal clear gender disparities, with the advantaged group varying
across languages and models. Surprisingly, those gaps are not explained by
acoustic or lexical properties. However, probing internal model states reveals
a correlation with the gender performance gap: the easier it is to distinguish
speaker gender in a language using probes, the smaller the gap, favoring
female speakers. Our results show that gender disparities persist even in
state-of-the-art models. Our findings have implications for the improvement of
multilingual ASR systems, underscoring the importance of access to training
data and nuanced evaluation to predict and mitigate gender gaps. We release
all code and artifacts at https://github.com/g8a9/multilingual-asr-gender-gap.
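The abstract mentions probing internal model states to test how easily speaker gender can be distinguished in each language. The snippet below is a minimal illustrative sketch of that general idea, not the paper's released code: it mean-pools encoder hidden states from a multilingual ASR model and trains a logistic-regression gender probe on them. The Whisper checkpoint, the pooling strategy, and the stand-in data are assumptions for illustration only.

```python
# Sketch of a speaker-gender probe on ASR encoder states (illustrative assumptions,
# not the authors' setup). Requires: torch, transformers, scikit-learn, numpy.
import numpy as np
import torch
from transformers import WhisperFeatureExtractor, WhisperModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

model_name = "openai/whisper-small"  # assumed checkpoint
extractor = WhisperFeatureExtractor.from_pretrained(model_name)
encoder = WhisperModel.from_pretrained(model_name).get_encoder().eval()

def utterance_embedding(waveform, sampling_rate=16_000):
    """Mean-pool the encoder's last hidden state into one vector per utterance."""
    inputs = extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(inputs.input_features).last_hidden_state  # (1, T, D)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Stand-in data so the sketch runs end to end: replace with real labeled speech
# (e.g. utterances with self-reported speaker gender). Random noise carries no
# gender signal, so the probe should score near chance here.
rng = np.random.default_rng(0)
waveforms = [rng.standard_normal(16_000) for _ in range(40)]  # ~1-second clips
genders = rng.integers(0, 2, size=40)                         # 0/1 gender labels

X = np.stack([utterance_embedding(w) for w in waveforms])
X_train, X_test, y_train, y_test = train_test_split(
    X, genders, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Gender probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

With real data, a probe like this would be trained per language; the abstract's finding is that languages where such probes separate speaker gender more easily tend to show smaller performance gaps, in favor of female speakers.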
DOI: 10.48550/arxiv.2402.17954