A Subject Data of ARKitFace Dataset

The ARKitFace dataset is established for training and evaluating both 3D face shape and 6DoF head pose estimation under perspective projection. A total of 500 volunteers, aged 9 to 60, were invited to record the dataset. Each subject sits in an unconstrained environment with the 3D acquisition equipment fixed in front of them, at a distance ranging from about 0.3 m to 0.9 m, and is asked to perform 33 specific expressions along with two head movements (from looking left to looking right, and from looking up to looking down). The 3D acquisition equipment is an iPhone 11: the shape and location of the face are tracked by the structured-light sensor, and the triangle mesh and 6DoF pose of the RGB images are obtained with the built-in ARKit toolbox. The triangle mesh consists of 1,220 vertices and 2,304 triangles. In total, 902,724 2D facial images (at a resolution of 1280 × 720 or 1440 × 1280) with ground-truth 3D mesh and 6DoF pose annotations were collected. All 500 subjects consented to the use of their data. We will release the 2D facial images, 3D meshes, and 6DoF pose annotations of all subjects under their authorization; we will not release personal information such as age or gender. The data of one subject has been uploaded to IEEE DataPort as an example.
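Since each RGB frame is paired with a 1,220-vertex triangle mesh and a 6DoF pose, the mesh can be rendered back onto the image with a standard perspective (pinhole) camera. The exact file layout and camera conventions of the release are not documented in this record, so the following is only a minimal Python sketch: it assumes the pose is given as a rotation matrix R and translation t mapping model coordinates to camera coordinates, and that per-image intrinsics (fx, fy, cx, cy) are available; the function name project_vertices and all numeric values are hypothetical.

```python
import numpy as np

def project_vertices(vertices, R, t, fx, fy, cx, cy):
    """Project 3D face-mesh vertices into the image under a pinhole camera.

    vertices : (N, 3) mesh vertices in the face (model) coordinate system,
               e.g. the 1,220 ARKit vertices.
    R        : (3, 3) rotation matrix of the 6DoF pose (model -> camera).
    t        : (3,)   translation vector of the 6DoF pose, in metres.
    fx, fy   : focal lengths in pixels.
    cx, cy   : principal point in pixels.
    Returns an (N, 2) array of pixel coordinates.
    """
    # Rigidly transform the mesh into camera coordinates with the 6DoF pose.
    cam = vertices @ R.T + t                  # (N, 3)

    # Perspective division (x/z, y/z), then scale by the focal lengths and
    # shift by the principal point -- the standard pinhole projection.
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

if __name__ == "__main__":
    # Toy example with made-up values: a frontal pose about 0.5 m from the
    # camera and intrinsics roughly matching a 1280 x 720 image.
    verts = np.random.uniform(-0.08, 0.08, size=(1220, 3))  # placeholder mesh
    R = np.eye(3)
    t = np.array([0.0, 0.0, 0.5])
    uv = project_vertices(verts, R, t, fx=1400.0, fy=1400.0, cx=640.0, cy=360.0)
    print(uv.shape)  # (1220, 2)
```

Checking that the projected vertices land on the face region of the corresponding frame is a quick sanity check on whatever pose and intrinsics conventions the released annotation files actually use.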


Bibliographic Details
Main Authors: Kao, Yueying; Pan, Bowen; Xu, Miao; Lyu, Jiangjing; Zhu, Xiangyu; Chang, Yuanzhang; Li, Xiaobo; Lei, Zhen
Format: Dataset
Language: English
Published: IEEE DataPort, 2022
DOI: 10.21227/jfbk-0j17
Online Access: https://commons.datacite.org/doi.org/10.21227/jfbk-0j17
Source: DataCite