Emirati-accented speaker identification in each of neutral and shouted talking environments

Bibliographic Details
Published in: International Journal of Speech Technology 2018-06, Vol. 21 (2), pp. 265-278
Main Authors: Shahin, Ismail; Nassif, Ali Bou; Bahutair, Mohammed
Format: Article
Language: English
Online Access: Full text
Description
Summary: This work is devoted to capturing an Emirati-accented speech database (Arabic United Arab Emirates database) in both neutral and shouted talking environments in order to study and enhance text-independent Emirati-accented speaker identification performance in the shouted environment, using first-order, second-order, and third-order circular suprasegmental hidden Markov models (CSPHMM1s, CSPHMM2s, and CSPHMM3s) as classifiers. In this research, our database was collected from 50 native Emirati speakers (25 per gender), each uttering eight common Emirati sentences in both the neutral and the shouted talking environment. The features extracted from the collected database are Mel-Frequency Cepstral Coefficients (MFCCs). Our results show that average Emirati-accented speaker identification performance in the neutral environment is 94.0%, 95.2%, and 95.9% based on CSPHMM1s, CSPHMM2s, and CSPHMM3s, respectively, while the average performance in the shouted environment is 51.3%, 55.5%, and 59.3%, respectively. The average speaker identification performance achieved in the shouted environment based on CSPHMM3s is very close to that obtained from subjective assessment by human listeners.
ISSN: 1381-2416, 1572-8110
DOI: 10.1007/s10772-018-9502-0
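
As background to the feature set named in the summary, the following is a minimal sketch of MFCC extraction in Python using the librosa library; the file name, the native-sampling-rate handling, and the choice of 13 coefficients are illustrative assumptions, not details taken from the paper.

# Minimal MFCC extraction sketch (illustrative; parameters are assumptions).
import librosa

# Load one utterance; sr=None keeps the recording's native sampling rate.
signal, sr = librosa.load("emirati_sentence_01.wav", sr=None)

# Compute Mel-Frequency Cepstral Coefficients: one 13-dimensional vector
# per analysis frame, returned as an array of shape (n_mfcc, n_frames).
mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

print(mfccs.shape)

In a speaker identification pipeline of this kind, such per-frame MFCC vectors would serve as the observation sequences on which the HMM-based classifiers are trained and evaluated.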