Robust facial 2D motion model estimation for 3D head pose extraction and automatic camera mouse implementation


Bibliographic details
Main authors: Nabati, M.; Behrad, A.
Format: Conference paper
Language: English
Online access: Order full text
Description
Summary: In this paper, we present a novel approach to 3D head pose estimation from monocular camera images for controlling mouse pointer movements and clicking events on the screen. This work is motivated by the goal of providing a non-contact instrument for people with severe disabilities to control the mouse pointer on a PC using low-cost and widely available hardware. The required information is derived from video captured by a monocular web camera mounted on the computer monitor. Our approach proceeds in six stages. First, the face area is extracted using Haar-like features and the AdaBoost algorithm. Second, point features are detected and tracked over video frames with the Lucas-Kanade (LK) algorithm. Third, the 2D transformation model between consecutive frames is estimated from the matched features using the robust RANSAC algorithm. Fourth, the estimated 2D transformation is applied to four hypothesized points on the face area. Fifth, the 3D rotation matrix and translation vector between the web camera and the head pose are estimated from these four point correspondences. Finally, the estimated 3D rotation and translation are used to derive mouse pointer movements and clicking events on the PC screen. Experimental results showed the promise of the algorithm.
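The third stage, robust estimation of a 2D motion model from matched features, can be sketched as follows. This is an illustrative pure-NumPy sketch of RANSAC fitting a 2D similarity transform (rotation, scale, translation) to noisy point matches, not the authors' implementation; the function names and the choice of a similarity model are assumptions for illustration.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform from point pairs.

    Solves for a, b, tx, ty in  [[a, -b], [b, a]] @ p + [tx, ty] = q,
    which encodes rotation and uniform scale plus translation.
    Returns a 2x3 matrix [[a, -b, tx], [b, a, ty]].
    """
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, -y, 1, 0]); rhs.append(u)
        A.append([y,  x, 0, 1]); rhs.append(v)
    params, *_ = np.linalg.lstsq(np.asarray(A, float),
                                 np.asarray(rhs, float), rcond=None)
    a, b, tx, ty = params
    return np.array([[a, -b, tx], [b, a, ty]])

def ransac_similarity(src, dst, n_iters=200, thresh=2.0, rng=None):
    """RANSAC loop: fit on minimal samples (2 pairs suffice for a
    similarity), count inliers by reprojection error, keep the best,
    then refit on all inliers of the winning model."""
    rng = rng or np.random.default_rng(0)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = None
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=2, replace=False)
        M = fit_similarity(src[idx], dst[idx])
        proj = src @ M[:, :2].T + M[:, 2]          # apply candidate model
        err = np.linalg.norm(proj - dst, axis=1)   # reprojection error
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_similarity(src[best_inliers], dst[best_inliers]), best_inliers
```

In the paper's pipeline, `src` and `dst` would be the LK-tracked feature locations in two consecutive frames; the RANSAC inlier test is what makes the motion model robust to mistracked features.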
DOI:10.1109/ISTEL.2010.5734135