Learning to Detect Head Movement in Unconstrained Remote Gaze Estimation in the Wild
Format: Article
Language: English
Abstract: Unconstrained remote gaze estimation remains challenging, largely because
of its vulnerability to large variability in head pose. Prior solutions struggle to
maintain reliable accuracy in unconstrained remote gaze tracking. Among them,
appearance-based solutions show great potential for improving gaze accuracy, but
existing works still suffer from head movement and are not robust enough for
real-world scenarios. In particular, most of them study gaze estimation under
controlled conditions, where the collected datasets cover only limited ranges of
head pose and gaze, which introduces further bias. In this paper, we propose novel
end-to-end appearance-based gaze estimation methods that more robustly incorporate
different levels of head-pose representation into gaze estimation. Our method
generalizes to real-world scenarios with low image quality and varied lighting, as
well as to scenarios where direct head-pose information is not available. To better
demonstrate the advantages of our methods, we further propose a new benchmark
dataset with the richest distribution of head-gaze combinations, reflecting
real-world scenarios. Extensive evaluations on several public datasets and our own
dataset demonstrate that our method consistently outperforms the state of the art
by a significant margin.
DOI: 10.48550/arxiv.2004.03737
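
The abstract describes incorporating head-pose representations into an end-to-end
appearance-based gaze estimator. The sketch below is only a rough illustration of
that general idea, not the paper's actual architecture: it fuses CNN features from
a face crop with an explicit head-pose vector before regressing gaze angles. The
backbone, layer sizes, 64x64 input resolution, and concatenation-based fusion are
all assumptions made for illustration.

```python
# Minimal sketch (assumed design, not the authors' method) of an
# appearance-based gaze estimator that conditions on head pose.
import torch
import torch.nn as nn


class HeadPoseAwareGazeNet(nn.Module):
    def __init__(self, pose_dim: int = 3, gaze_dim: int = 2):
        super().__init__()
        # Small CNN backbone over an assumed 3x64x64 face crop.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse appearance features with the head-pose vector by concatenation,
        # then regress a gaze direction (e.g. yaw and pitch angles).
        self.head = nn.Sequential(
            nn.Linear(64 + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, gaze_dim),
        )

    def forward(self, face: torch.Tensor, head_pose: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(face)                              # (B, 64)
        return self.head(torch.cat([feat, head_pose], dim=1))   # (B, gaze_dim)


if __name__ == "__main__":
    model = HeadPoseAwareGazeNet()
    faces = torch.randn(4, 3, 64, 64)   # batch of face crops
    poses = torch.randn(4, 3)           # yaw/pitch/roll head-pose angles
    print(model(faces, poses).shape)    # torch.Size([4, 2])
```

When direct head-pose information is unavailable, as the abstract mentions, the
pose vector here would have to come from a separate pose estimator or be replaced
by an implicit, learned pose representation; which variant the paper actually uses
is not specified in this record.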