DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction
Saved in:
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Personal Digital Assistants (PDAs) - such as Siri, Alexa, and Google
Assistant - play an increasingly important role in accessing information and
completing tasks spanning multiple domains, for diverse groups of users. A
text-to-speech (TTS) module allows PDAs to interact in a natural, human-like
manner, and plays a vital role when the interaction involves people with
visual impairments or other disabilities. To serve a diverse set of users, an
inclusive TTS system must recognize and correctly pronounce text in different
languages and dialects. Despite great progress in speech synthesis, the
pronunciation accuracy of named entities in a multilingual setting still
leaves considerable room for improvement. Existing approaches to correcting
named-entity (NE) mispronunciations, such as retraining Grapheme-to-Phoneme
(G2P) models or maintaining a TTS pronunciation dictionary, require expensive,
time-consuming annotation of the ground-truth pronunciation. In this work, we
present a highly precise, PDA-compatible pronunciation learning framework for
the task of TTS mispronunciation detection and correction. We also propose a
novel mispronunciation detection model called DTW-SiameseNet, which employs
metric learning with a Siamese architecture for Dynamic Time Warping (DTW),
trained with a triplet loss. We demonstrate that a locale-agnostic,
privacy-preserving solution to TTS mispronunciation detection is feasible. We
evaluate our approach on a real-world dataset and on a corpus of NE
pronunciations from an anonymized audio dataset of person names recorded by
participants from 10 different locales. Human evaluation shows our proposed
approach improves pronunciation accuracy on average by ~6% compared to strong
phoneme-based and audio-based baselines.
DOI: 10.48550/arxiv.2303.00171
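The core algorithmic component named in the abstract, Dynamic Time Warping, aligns two sequences of unequal length by allowing one-to-many frame matches. The sketch below is a minimal illustration only: where DTW-SiameseNet learns the frame-level distance with a Siamese network and triplet loss, this version substitutes a plain absolute difference, an assumption made here for brevity.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D sequences.

    Illustrative sketch: the local distance is a plain absolute
    difference, NOT the learned Siamese metric from the paper.
    """
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

Because DTW permits one-to-many alignment, a sequence compared with a time-stretched copy of itself (e.g. `[1, 2, 3]` vs. `[1, 2, 2, 3]`) incurs zero cost, which is why DTW suits comparing pronunciations spoken at different rates.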