A new method for automatic tracking of facial landmarks in 3D motion captured images (4D)
Bibliographic Details
Published in: International journal of oral and maxillofacial surgery 2013-01, Vol. 42 (1), p. 9-18
Main authors: Al-Anezi, T, Khambay, B, Peng, M.J, O’Leary, E, Ju, X, Ayoub, A
Format: Article
Language: English
Online access: Full text
Description
Summary: The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18–35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models, and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of dynamic motion during facial animations.
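The two error measures reported in the abstract (per-axis coordinate discrepancy and mean 3D distance between manually digitised and automatically tracked landmarks) can be sketched as follows. This is a minimal illustration with made-up coordinates, not the authors' software; the function name and the NumPy-based layout of landmarks as (N, 3) arrays are assumptions.

```python
import numpy as np

def landmark_discrepancy(manual, tracked):
    """Compare two (N, 3) arrays of landmark x, y, z coordinates in mm.

    Returns the mean absolute discrepancy along each axis and the
    mean Euclidean (3D) distance between corresponding landmarks.
    """
    diff = tracked - manual
    per_axis = np.abs(diff).mean(axis=0)             # mean |dx|, |dy|, |dz|
    mean_dist = np.linalg.norm(diff, axis=1).mean()  # mean 3D distance
    return per_axis, mean_dist

# Hypothetical coordinates for two landmarks (mm):
manual = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
tracked = np.array([[10.1, 20.0, 29.9], [15.2, 24.9, 35.1]])
per_axis, mean_dist = landmark_discrepancy(manual, tracked)
```

In the study's terms, the per-axis values correspond to the reported 0.17 mm coordinate discrepancies and the Euclidean mean to the 0.55 mm distance bound.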
ISSN: 0901-5027
eISSN: 1399-0020
DOI: 10.1016/j.ijom.2012.10.035