SD-QA: Spoken Dialectal Question Answering for the Real World
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
Abstract: | Question answering (QA) systems are now available through numerous commercial
applications for a wide variety of domains, serving millions of users who
interact with them via speech interfaces. However, current benchmarks in QA
research account neither for the errors that speech recognition models might
introduce, nor for the language varieties (dialects) of the
users. To address this gap, we augment an existing QA dataset to construct a
multi-dialect, spoken QA benchmark covering five languages (Arabic, Bengali, English,
Kiswahili, Korean), with more than 68k audio prompts in 24 dialects from 255
speakers. We provide baseline results showcasing the real-world performance of
QA systems and analyze the effect of language variety and other sensitive
speaker attributes on downstream performance. Lastly, we study the fairness of
the ASR and QA models with respect to the underlying user populations. The
dataset, model outputs, and code for reproducing all our experiments are
available: https://github.com/ffaisal93/SD-QA. |
DOI: | 10.48550/arxiv.2109.12072 |