Gestures and Lip Shape Integration for Cued Speech Recognition

Bibliographic Details
Main Authors: Heracleous, P., Hagita, N., Beautemps, D.
Format: Conference Proceedings
Language: English
Description
Summary: In this article, automatic recognition of Cued Speech in French based on hidden Markov models (HMMs) is presented. Cued Speech is a visual communication mode in which hand shapes at different positions, combined with the lip patterns of speech, make all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the limitations of lipreading and thus enable deaf children and adults to understand full spoken language. In this study, the lip shape component is fused with the hand component using multistream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments were conducted using data from a normal-hearing cuer and a deaf cuer. The obtained phoneme accuracy was 83.5% for the normal-hearing cuer and 82.1% for the deaf cuer.
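
The abstract does not give decoder details or stream weights; the following is only a minimal sketch of the general multistream decision-fusion idea it refers to, in which per-stream HMM log-likelihoods (here, lip-shape and hand streams) are combined as a weighted sum before picking the best phoneme. All function names, weights, and scores below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multistream HMM decision fusion (illustrative only).
# Assumption: each stream (lip shape, hand) has its own HMM whose
# log-likelihood for a candidate phoneme model is already available;
# the combined score is a weighted sum of the per-stream log-likelihoods.

def fused_log_likelihood(lip_loglik, hand_loglik, lip_weight=0.6, hand_weight=0.4):
    """Combine per-stream log-likelihoods with stream weights.

    The weights (0.6/0.4 here) are hypothetical; in practice they are
    tuned on development data and typically constrained to sum to 1.
    """
    return lip_weight * lip_loglik + hand_weight * hand_loglik

def recognize(phoneme_logliks):
    """Return the phoneme whose fused score is highest.

    `phoneme_logliks` maps each phoneme label to a (lip, hand) pair of
    log-likelihoods produced by the two stream HMMs.
    """
    return max(
        phoneme_logliks,
        key=lambda p: fused_log_likelihood(*phoneme_logliks[p]),
    )

# Example with made-up scores for three phonemes.
scores = {"a": (-12.3, -15.1), "p": (-14.0, -11.8), "u": (-13.5, -14.2)}
print(recognize(scores))  # prints the best-scoring phoneme under these weights
```
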
ISSN: 1051-4651, 2831-7475
DOI: 10.1109/ICPR.2010.548