Assessing the Speaker Recognition Performance of Naive Listeners Using Mechanical Turk
Main authors:
Format: Report
Language: English
Subjects:
Online access: Order full text
Abstract: In this paper we attempt to quantify the ability of naive listeners to perform speaker recognition in the context of the NIST evaluation task. We describe our protocol: a series of listening experiments using a large number of naive listeners (432) on Amazon's Mechanical Turk that attempt to measure the ability of the average human listener to perform speaker recognition. Our goal was to compare the performance of the average human listener to both forensic experts and state-of-the-art automatic systems. We show that naive listeners vary substantially in their performance, but that pooling the votes of several listeners can achieve performance similar to that of expert forensic examiners.
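The abstract does not specify the exact fusion rule used to pool listener decisions. Below is a minimal illustrative sketch, assuming a simple per-trial majority vote over same/different-speaker judgments; the function name, the 'same'/'different' labels, and the tie-break rule are all assumptions for illustration, not details taken from the paper.

```python
from collections import Counter

def majority_vote(listener_decisions):
    """Fuse per-trial judgments ('same' or 'different') from several
    naive listeners into a single decision by simple majority vote.

    Ties are broken in favour of 'different' here; this tie-break
    rule is an assumption, not taken from the paper.
    """
    counts = Counter(listener_decisions)
    if counts["same"] > counts["different"]:
        return "same"
    return "different"

# Hypothetical example: seven naive listeners judge one NIST-style trial.
trial_votes = ["same", "same", "different", "same", "different", "same", "same"]
print(majority_vote(trial_votes))  # -> "same"
```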