Style transformed synthetic images for real world gaze estimation by using residual neural network with embedded personal identities

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023, Vol. 53(2), pp. 2026-2041
Main authors: Wang, Quan; Wang, Hui; Dang, Ruo-Chen; Zhu, Guang-Pu; Pi, Hai-Feng; Shic, Frederick; Hu, Bing-liang
Format: Article
Language: English
Online access: Full text
Description
Summary: Gaze interaction is essential for social communication in many scenarios; interpreting people’s gaze direction is therefore helpful for natural interactions between humans and robots or virtual characters. In this study, we first adopt a residual neural network (ResNet) structure with an embedding layer for personal identity (ID-ResNet), which outperformed the current best result of 2.51° on MPIIGaze data, a benchmark dataset for gaze estimation. To avoid using manually labelled data, we used UnityEyes synthetic images, with and without style transformation, as the training data. We exceeded the previously reported best results on MPIIGaze data (from 2.76° to 2.55°) and UT-Multiview data (from 4.01° to 3.40°). Moreover, the model needs to be fine-tuned with only a few “calibration” examples for a new person to yield significant performance gains. Finally, we present the KLBS-eye dataset, which contains 15,350 images collected from 12 participants looking in nine known directions, and on which we obtained a state-of-the-art result of 0.59° ± 1.69°.
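
The core architectural idea in the abstract, a ResNet feature extractor combined with a learned per-person identity embedding ahead of the gaze regression head, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of that idea rather than the authors' implementation: the embedding width, the concatenation-based fusion, and the head layout are all assumptions, and num_ids=12 simply mirrors the participant count of the KLBS-eye dataset.

    # Minimal sketch of a gaze regressor with a person-identity embedding
    # (a hypothetical reading of the "ID-ResNet" idea; layer sizes and
    # names are assumptions, not the published architecture).
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class IDResNet(nn.Module):
        def __init__(self, num_ids: int, id_dim: int = 16):
            super().__init__()
            backbone = resnet18(weights=None)
            backbone.fc = nn.Identity()      # expose the 512-d pooled features
            self.backbone = backbone
            self.id_embed = nn.Embedding(num_ids, id_dim)
            self.head = nn.Sequential(       # fuse image features with identity
                nn.Linear(512 + id_dim, 128),
                nn.ReLU(inplace=True),
                nn.Linear(128, 2),           # gaze as (yaw, pitch) angles
            )

        def forward(self, eye_img, person_id):
            feat = self.backbone(eye_img)                    # (B, 512)
            ident = self.id_embed(person_id)                 # (B, id_dim)
            return self.head(torch.cat([feat, ident], dim=1))

    # Example: a batch of four eye crops from two known participants.
    model = IDResNet(num_ids=12)             # 12 mirrors the KLBS-eye count
    imgs = torch.randn(4, 3, 224, 224)
    ids = torch.tensor([0, 0, 1, 1])
    print(model(imgs, ids).shape)            # torch.Size([4, 2])

Under this reading, fine-tuning for a new person, as described in the abstract, would plausibly amount to learning one new embedding row from a handful of calibration examples while leaving most backbone weights untouched.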
ISSN: 0924-669X (print), 1573-7497 (electronic)
DOI: 10.1007/s10489-022-03481-9