Introducing HOT3D: An Egocentric Dataset for 3D Hand and Object Tracking

We introduce HOT3D, a publicly available dataset for egocentric hand and object tracking in 3D. The dataset offers over 833 minutes (more than 3.7M images) of multi-view RGB/monochrome image streams showing 19 subjects interacting with 33 diverse rigid objects, multi-modal signals such as eye gaze or scene point clouds, as well as comprehensive ground truth annotations including 3D poses of objects, hands, and cameras, and 3D models of hands and objects. In addition to simple pick-up/observe/put-down actions, HOT3D contains scenarios resembling typical actions in a kitchen, office, and living room environment. The dataset is recorded by two head-mounted devices from Meta: Project Aria, a research prototype of light-weight AR/AI glasses, and Quest 3, a production VR headset sold in millions of units. Ground-truth poses were obtained by a professional motion-capture system using small optical markers attached to hands and objects. Hand annotations are provided in the UmeTrack and MANO formats and objects are represented by 3D meshes with PBR materials obtained by an in-house scanner. We aim to accelerate research on egocentric hand-object interaction by making the HOT3D dataset publicly available and by co-organizing public challenges on the dataset at ECCV 2024. The dataset can be downloaded from the project website: https://facebookresearch.github.io/hot3d/.
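
The abstract states that the ground truth includes 3D poses of objects, hands, and cameras, with hand annotations in the UmeTrack and MANO formats. The sketch below is purely illustrative and is not the official HOT3D API: all class and field names are hypothetical, and the MANO parameter counts assume the commonly used parameterization (10 shape coefficients plus 48 axis-angle pose values, i.e. global orientation and 15 joints with 3 values each). It only shows what one frame of such ground truth could look like as a data structure.

```python
# Hypothetical sketch, NOT the official HOT3D data format or loader.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Pose6DoF:
    """Rigid 3D pose: row-major 3x3 rotation and translation in metres."""
    rotation: List[float]     # 9 values, row-major 3x3 rotation matrix
    translation: List[float]  # 3 values, [x, y, z]


@dataclass
class ManoHand:
    """MANO-format hand annotation (assumed standard parameterization)."""
    betas: List[float]        # 10 shape coefficients
    thetas: List[float]       # 48 axis-angle pose parameters
    wrist_pose: Pose6DoF      # global hand placement in the world frame


@dataclass
class FrameAnnotation:
    """Ground truth for a single multi-view frame (hypothetical layout)."""
    timestamp_ns: int
    camera_poses: Dict[str, Pose6DoF] = field(default_factory=dict)  # per camera stream
    object_poses: Dict[str, Pose6DoF] = field(default_factory=dict)  # per rigid object
    hands: Dict[str, ManoHand] = field(default_factory=dict)         # "left" / "right"


# Example: one frame with a single tracked object and a right hand.
identity = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
frame = FrameAnnotation(
    timestamp_ns=0,
    object_poses={"object_01": Pose6DoF(rotation=identity, translation=[0.1, 0.0, 0.4])},
    hands={"right": ManoHand(betas=[0.0] * 10, thetas=[0.0] * 48,
                             wrist_pose=Pose6DoF(rotation=identity,
                                                 translation=[0.0, -0.1, 0.3]))},
)
print(len(frame.object_poses), len(frame.hands))
```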

Bibliographic Details
Main Authors: Banerjee, Prithviraj; Shkodrani, Sindi; Moulon, Pierre; Hampali, Shreyas; Zhang, Fan; Fountain, Jade; Miller, Edward; Basol, Selen; Newcombe, Richard; Wang, Robert; Engel, Jakob Julian; Hodan, Tomas
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text (arXiv: https://arxiv.org/abs/2406.09598)
DOI: 10.48550/arxiv.2406.09598
Source: arXiv.org