Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking
How to Cite
Negar Alinaghi and Ioannis Giannopoulos. 2022. Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking. In 2022 Symposium on Eye Tracking Research and Applications (ETRA '22). Association for Computing Machinery, New York, NY, USA, Article 2, 1–7. https://doi.org/10.1145/3517031.3529624
Abstract
Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs in order to simulate the gaze signal at head-level. Using image processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and the respective frames are then merged. Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, with an error of approximately 37% introduced by employing saccades without considering head movements.
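The projection step can be illustrated with a short sketch. This is a minimal, hypothetical example assuming OpenCV feature matching (ORB + RANSAC) between the two frames containing a pair of consecutive fixations; it is not the authors' implementation, and `project_saccade`, `fix_a`, and `fix_b` are illustrative names. Fixation positions are assumed to be in pixel coordinates of their respective frames (i.e., normalized gaze positions scaled by the frame size).

```python
# Minimal sketch (not the authors' code): estimate the homography between the
# frames of two consecutive fixations and measure the projected saccade length.
import cv2
import numpy as np

def project_saccade(frame_a, frame_b, fix_a, fix_b):
    """fix_a/fix_b: fixation positions in pixel coordinates of frame_a/frame_b.
    Returns the saccade length with fix_a projected into frame_b's system."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        raise ValueError("homography estimation failed")
    # Project the first fixation into the second frame's coordinate system,
    # compensating for the head movement between the two frames.
    fix_a_proj = cv2.perspectiveTransform(np.float32([[fix_a]]), H)[0, 0]
    return float(np.linalg.norm(fix_a_proj - np.float32(fix_b)))
```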
Code and Data
The data folder contains one sample gaze recording file (named gaze_positions_4bwCZ9awAx_unfamiliar.csv) and the corresponding computed fixations (named fixation_4bwCZ9awAx_unfamiliar.csv).
Note: For data and privacy protection reasons, the corresponding video recording cannot be shared publicly. The video is therefore published separately on a per-request basis. Check this link to request access.
The gaze positions file contains the following columns:
'gaze_timestamp': the timestamp of the gaze position, starting at 0 (start of recording).
'world_index': index of the frame in the scene-camera video
'confidence': a quality measure not yet (June 2022) implemented by Pupil Labs; the column therefore contains only zeros. If your export lacks this column, create one with this header and set all values to 0 (see the sketch after this list).
'norm_pos_x': normalized x-position of the gaze
'norm_pos_y': normalized y-position of the gaze
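A minimal sketch, assuming pandas, of loading the sample gaze file and adding the 'confidence' column when it is missing, as described above (the path assumes the data folder layout described earlier):

```python
import pandas as pd

# Load the sample gaze recording shipped in the data folder.
gaze = pd.read_csv("data/gaze_positions_4bwCZ9awAx_unfamiliar.csv")
if "confidence" not in gaze.columns:
    gaze["confidence"] = 0  # placeholder zeros, per the column description
```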
The fixations file contains the following columns (a loading sketch follows the list):
'id': incrementing id starting at 0
'time': the duration of the fixation
'world_index': index of the frame in the scene-camera video related to this fixation
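A minimal sketch, again assuming pandas, of reading the fixations file and pairing consecutive fixations into the saccades that the method above projects (variable names are illustrative):

```python
import pandas as pd

fixations = pd.read_csv("data/fixation_4bwCZ9awAx_unfamiliar.csv")
# A saccade connects fixation i to fixation i + 1; keeping 'world_index' for
# each endpoint identifies the scene-camera frames used for the projection.
rows = list(fixations.itertuples(index=False))
saccades = list(zip(rows[:-1], rows[1:]))
```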
DOI: 10.48436/we677-ntp71