Time Series Data of Gaze, Head Pose, Hand Pose, and Object Positions for Object Approaches with a Given Intention



Bibliographic Details
Main Authors: Fennel, Michael, Garbay, Serge, Zea, Antonio, Hanebeck, Uwe D.
Format: Dataset
Language: English
Description

Summary:
This data set comprises time series data of gaze, head pose, hand pose, and object positions for object approaches with a given intention. The data was captured in the context of the following publication:

Michael Fennel, Serge Garbay, Antonio Zea, Uwe D. Hanebeck, "Intention Estimation with Recurrent Neural Networks for Mixed Reality Environments", Proceedings of the 26th International Conference on Information Fusion (Fusion 2023) (under review).

A Microsoft HoloLens 2 was used to record the data at 60 fps under the modalities explained in detail in the above-mentioned paper. The file names are structured as follows:

1st/2nd: Files labeled "1st" contain approaches to randomly placed objects on a grid, rendered in augmented reality; the user is informed about the object to approach via a visual cue. This corresponds to Section IV-A. Files labeled "2nd" contain approaches to real objects placed statically in a room; the user is informed about the object to approach via a voice command.
unfiltered: Contains all approaches, including those where the user disregards the given commands. Filtering is done as described in the paper.
train/val/test: The first data set was split in a 70/20/10 ratio into training, validation, and test sets.

Each data set contains the following columns. In each approach, 5 objects numbered from i=0 to i=4 are present.
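Since each file holds many approaches from several subjects, a typical first step is to group the rows into individual approaches. The following sketch assumes the files parse into a table with the documented columns (the file name "1st_train.csv" and the CSV format are assumptions, not stated in this record); only the column names time, subject, trial, and target_label are taken from the description below.

```python
import pandas as pd

def split_into_approaches(df):
    """Group the rows of one data file into individual approaches,
    keyed by (subject, trial), each sorted by time."""
    return {key: g.sort_values("time") for key, g in df.groupby(["subject", "trial"])}

# Usage with a synthetic frame standing in for a real file,
# e.g. df = pd.read_csv("1st_train.csv") (assumed name/format):
df = pd.DataFrame({
    "time": [0.0, 1 / 60, 0.0, 1 / 60],
    "subject": [0, 0, 0, 0],
    "trial": [0, 0, 1, 1],
    "target_label": [2, 2, 4, 4],
})
approaches = split_into_approaches(df)
```

Each value of the resulting dictionary is one approach, i.e. one time series ending at the object given by its constant target_label.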
General:
  time: in seconds
  subject: consecutive subject number
  handedness: left (1), right (0)
  trial: consecutive trial number per subject
  target_label: index of the object to approach (0 to 4)

Data in world coordinates:
  head_{x,y,z}: head position
  head_quat_{w,x,y,z}: head orientation quaternion
  W_gaze_{x,y,z}: gaze direction
  W_r_hand_{x,y,z}: right hand position
  W_r_hand_quat_{w,x,y,z}: right hand orientation quaternion
  W_l_hand_{x,y,z}: left hand position
  W_l_hand_quat_{w,x,y,z}: left hand orientation quaternion
  W_object_i_{x,y,z}: position of object i
  W_object_i_quat_{w,x,y,z}: orientation quaternion of object i

Data in egocentric coordinates (head coordinate system); this data is provided for convenience and can be derived from the other data:
  gaze_{x,y,z}: gaze direction
  r_hand_{x,y,z}: right hand position
  r_hand_quat_{w,x,y,z}: right hand orientation quaternion
  l_hand_{x,y,z}: left hand position
  l_hand_quat_{w,x,y,z}: left hand orientation quaternion
  object_i_{x,y,z}: position of object i
  object_i_quat_{w,x,y,z}: orientation quaternion of object i

Acknowledgment: This work was supported by the ROBDEKON
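The egocentric columns above can be derived from the world-frame columns. A minimal sketch of that derivation for positions, assuming the quaternions use the (w, x, y, z) Hamilton convention and that head_quat rotates head-frame vectors into the world frame (both are assumptions; the paper defines the exact conventions):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def world_to_egocentric(p_world, head_pos, head_quat):
    """Express a world-frame point in the head (egocentric) frame.

    Assumes head_quat (= head_quat_{w,x,y,z}) rotates head-frame
    vectors into the world frame, so the inverse rotation maps the
    head-relative offset back into head coordinates.
    """
    q_inv = np.array([head_quat[0], -head_quat[1], -head_quat[2], -head_quat[3]])
    # Embed the offset as a pure quaternion and apply q^{-1} (.) q.
    d = np.concatenate(([0.0], np.asarray(p_world) - np.asarray(head_pos)))
    return quat_mul(quat_mul(q_inv, d), head_quat)[1:]
```

For example, object_i_{x,y,z} would correspond to world_to_egocentric(W_object_i, head, head_quat). Note that W_gaze is a direction rather than a position, so for the gaze columns only the rotation applies, without subtracting the head position.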
DOI:10.5281/zenodo.7687773