Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking

How to Cite? Negar Alinaghi and Ioannis Giannopoulos. 2022. Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking. In 2022 Symposium on Eye Tracking Research and Applications (ETRA '22). Association for Computing Machinery, New York, NY, USA, Article 2, 1–7. https://doi.org/10.1145/3517031.3529624

Bibliographic Details
Author: Alinaghi, Negar
Format: Dataset
Language: English
DOI: 10.48436/we677-ntp71
Publisher: TU Wien
Published: 2024-07-10
Source: DataCite

Description

Abstract
Saccadic eye movements are known to serve as a suitable proxy for task prediction. In mobile eye-tracking, however, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse the gaze signal with head-movement signals measured by IMUs in order to simulate the gaze signal at head level. Using image-processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and the respective frames are then merged (stitched). Lastly, pairs of consecutive fixations (forming a saccade) are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades: approximately 37% error is introduced when saccades are used without considering head movements.

Code and Data
The data folder contains one sample gaze recording file (named gaze_positions_4bwCZ9awAx_unfamiliar.csv) and the corresponding computed fixations (named fixation_4bwCZ9awAx_unfamiliar.csv).

Note: For data and privacy protection reasons, the corresponding video recording cannot be shared publicly. The video is therefore published separately on a per-request basis. Check this link for requesting access.

The gaze positions file contains the following columns:
'gaze_timestamp': timestamp of the gaze position, starting at 0 (start of recording)
'world_index': index of the frame in the scene-camera video
'confidence': a quality measure not yet implemented by Pupil Labs (as of June 2022); the column therefore contains only zeros. If your file does not have this column, create a column with this header and set all values to 0.
'norm_pos_x': normalized x-position of the gaze
'norm_pos_y': normalized y-position of the gaze

The fixations file contains the following columns:
'id': incrementing id starting at 0
'time': duration of the fixation
'world_index': index of the frame in the scene-camera video related to this fixation
'x_mean': normalized x-position of the fixation
'y_mean': normalized y-position of the fixation
'start_frame': first frame that contains the fixation point
'end_frame': last frame that contains the fixation point
'dispersion': computed dispersion of the fixation

idt.py is the Python implementation of the I-DT algorithm we used for this paper to compute the fixations from the gaze positions (a minimal sketch of the general idea is given below). If you want to use your own pre-computed fixations instead of our I-DT implementation, make sure that your fixation file contains the columns listed above and run main.py with the video and the fixation CSV file.

main.py implements the saccadic-correction algorithm proposed in the paper; its two inputs are the fixation file and the video file (a simplified sketch of the projection step is also given below). It creates two outputs:
a CSV file containing the fixations with two added columns, transformed_x and transformed_y, which give the projected x and y coordinates of each fixation
a CSV file containing the saccade length and azimuth computed from these newly projected coordinates

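For orientation, the following is a minimal, self-contained sketch of dispersion-threshold (I-DT) fixation detection applied to a gaze_positions file with the columns listed above. It is not the idt.py implementation used for the paper; the dispersion and duration thresholds and the use of normalized coordinates are assumptions chosen for illustration.

```python
# Minimal I-DT (dispersion-threshold) sketch; thresholds are assumed values,
# not the settings used in idt.py for the published results.
import pandas as pd

DISPERSION_THRESHOLD = 0.05   # assumed, in normalized image coordinates
DURATION_THRESHOLD = 0.1      # assumed minimum fixation duration in seconds

def dispersion(window):
    # I-DT dispersion: spread in x plus spread in y within the window
    return (window['norm_pos_x'].max() - window['norm_pos_x'].min()) + \
           (window['norm_pos_y'].max() - window['norm_pos_y'].min())

def idt_fixations(gaze: pd.DataFrame) -> pd.DataFrame:
    """Detect fixations from a gaze_positions table with the columns described above."""
    fixations, start, n = [], 0, len(gaze)
    while start < n:
        # grow an initial window that spans at least the duration threshold
        end = start
        while end < n and gaze['gaze_timestamp'].iloc[end] - gaze['gaze_timestamp'].iloc[start] < DURATION_THRESHOLD:
            end += 1
        if end >= n:
            break
        if dispersion(gaze.iloc[start:end + 1]) <= DISPERSION_THRESHOLD:
            # expand the window while the dispersion stays below the threshold
            while end + 1 < n and dispersion(gaze.iloc[start:end + 2]) <= DISPERSION_THRESHOLD:
                end += 1
            window = gaze.iloc[start:end + 1]
            fixations.append({
                'id': len(fixations),
                'time': window['gaze_timestamp'].iloc[-1] - window['gaze_timestamp'].iloc[0],
                'world_index': int(window['world_index'].iloc[0]),
                'x_mean': window['norm_pos_x'].mean(),
                'y_mean': window['norm_pos_y'].mean(),
                'start_frame': int(window['world_index'].min()),
                'end_frame': int(window['world_index'].max()),
                'dispersion': dispersion(window),
            })
            start = end + 1
        else:
            start += 1
    return pd.DataFrame(fixations)

gaze = pd.read_csv('gaze_positions_4bwCZ9awAx_unfamiliar.csv')
if 'confidence' not in gaze.columns:
    gaze['confidence'] = 0   # column expected by the pipeline; currently all zeros anyway
idt_fixations(gaze).to_csv('fixations.csv', index=False)
```

To reproduce the published fixations, idt.py should be used instead; this sketch only illustrates the windowing logic behind the columns described above.
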
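The sketch below illustrates the projection step performed by main.py: each pair of consecutive fixations is expressed in a common coordinate system via a homography between their scene-camera frames, and saccade length and azimuth are then computed from the projected points. Unlike main.py, which reuses the homography matrices produced by the frame-stitching algorithm, this simplified version estimates a pairwise homography with ORB features and RANSAC; the video file name and the bottom-left origin of the normalized coordinates are assumptions.

```python
# Simplified head-movement-aware saccade computation (not the main.py implementation):
# map the previous fixation into the current fixation's frame via a homography,
# then measure saccade length and azimuth in that common coordinate system.
import cv2
import numpy as np
import pandas as pd

def read_frame(cap, index):
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"could not read frame {index}")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def homography(src, dst):
    """Estimate the homography mapping pixels of src onto dst (ORB + RANSAC)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(src, None)
    k2, d2 = orb.detectAndCompute(dst, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src_pts = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    return H

fixations = pd.read_csv('fixation_4bwCZ9awAx_unfamiliar.csv')
cap = cv2.VideoCapture('scene_video.mp4')        # hypothetical file name for the scene-camera video
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

saccades = []
for prev, curr in zip(fixations.itertuples(), fixations.iloc[1:].itertuples()):
    f1 = read_frame(cap, prev.world_index)
    f2 = read_frame(cap, curr.world_index)
    H = homography(f1, f2)                        # maps frame-1 pixels into frame-2 coordinates
    # normalized gaze coordinates -> pixels (y flipped: bottom-left origin assumed)
    p1 = np.float32([[[prev.x_mean * width, (1 - prev.y_mean) * height]]])
    p2 = np.array([curr.x_mean * width, (1 - curr.y_mean) * height])
    p1_proj = cv2.perspectiveTransform(p1, H)[0, 0]   # previous fixation in frame-2 coordinates
    dx, dy = p2 - p1_proj
    saccades.append({
        'saccade_id': prev.id,
        'length_px': float(np.hypot(dx, dy)),
        'azimuth_deg': float(np.degrees(np.arctan2(-dy, dx))),  # image y grows downward
    })
pd.DataFrame(saccades).to_csv('saccades_projected.csv', index=False)
```
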
An Ethical Note
The data collected for this study was reviewed by the Pilot Research Ethics Committee at TU Wien. The participants gave written consent for their data to be used for research purposes. We also maintained the transparency of the video recordings in public spaces by wearing a sign indicating that a video recording was in progress.

License
All data is published under the CC-BY 4.0 license. The code is under the MIT license.