SIGN LANGUAGE SUBTITLING BY HIGHLY COMPREHENSIBLE 'SEMANTROIDS'



Bibliographic Details
Published in: Journal of educational technology systems 2006-01, Vol. 35 (1), p. 61-87
Authors: Adamo-Villani, Nicoletta; Beni, Gerardo
Format: Article
Language: English
Online access: Full text
Description
Abstract: We introduce a new method of sign language subtitling aimed at young deaf children who have not yet acquired reading skills and can communicate only via signs. The method is based on: 1) the recently developed concept of the 'semantroid™' (an animated 3D avatar limited to head and hands); 2) the design, development, and psychophysical evaluation of a highly comprehensible model of the semantroid; and 3) the implementation of a new multi-window, scrolling captioning technique. Based on 'semantic intensity' estimates, we have enhanced the comprehensibility of the semantroid by: i) the use of non-photorealistic rendering (NPR); and ii) the creation of a 3D face model with distinctive features. We then validated the comprehensibility of the semantroid through a series of tests on human subjects that assessed accuracy and speed of recognition of facial stimuli and hand gestures as a function of mode of representation and facial geometry. Test results show that, in the context of sign language subtitling (i.e., in limited space), the most comprehensible semantroid model is a toon-rendered model with distinctive facial features. Because of its enhanced comprehensibility, this type of semantroid can be scaled to fit in a very small area, making it possible to display multiple captioning windows simultaneously.
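The toon-rendered look favored by the recognition tests is the product of standard cel-shading, which collapses the continuous diffuse lighting gradient into a few flat bands so that a figure scaled down to a small captioning window keeps bold, readable shapes. The sketch below is a generic illustration of that quantization step only, not the authors' implementation; the toon_shade function, the band count, and the light direction are hypothetical choices made for the example.

```python
# Minimal sketch of cel-shading (toon) quantization as used in generic NPR.
# Illustrative only; parameters and function names are assumptions, not the
# semantroid implementation described in the article.
import numpy as np

def toon_shade(normals: np.ndarray, light_dir: np.ndarray, bands: int = 3) -> np.ndarray:
    """Quantize per-pixel diffuse (Lambertian) intensity into a few flat bands.

    normals   : (H, W, 3) array of unit surface normals per pixel
    light_dir : (3,) unit vector pointing toward the light
    bands     : number of discrete shading levels (hypothetical parameter)
    """
    # Standard diffuse term, clamped to [0, 1]
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)
    # Collapse the smooth gradient into 'bands' flat steps; the hard steps are
    # what give toon rendering its poster-like, easily legible appearance
    quantized = np.floor(diffuse * bands) / (bands - 1)
    return np.clip(quantized, 0.0, 1.0)

if __name__ == "__main__":
    # Tiny synthetic example: in practice the normals would come from the
    # renderer's geometry; here we just use random unit vectors.
    rng = np.random.default_rng(0)
    n = rng.normal(size=(4, 4, 3))
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    print(toon_shade(n, np.array([0.0, 0.0, 1.0])))
```

Because the quantized shading reads clearly even at very small scales, a sketch like this also suggests why a toon-rendered avatar can be shrunk enough to fit several captioning windows on screen at once, as the abstract reports.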
ISSN:0047-2395