TabletGaze: dataset and analysis for unconstrained appearance-based gaze estimation in mobile tablets



Bibliographic details
Published in: Machine vision and applications, 2017-08, Vol. 28 (5-6), p. 445-461
Main authors: Huang, Qiong; Veeraraghavan, Ashok; Sabharwal, Ashutosh
Format: Article
Language: English
Online access: Full text
Description: We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and method of holding the tablet are not constrained. We collected a large unconstrained gaze dataset of tablet users, labeled the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender, and need for prescription glasses, all of which might impact gaze estimation accuracy. We made three major observations on the collected data and employed a baseline algorithm for analyzing the impact of several factors on gaze estimation accuracy. The baseline algorithm is based on multilevel HoG features and a Random Forests regressor, and achieves a mean error of 3.17 cm. We perform an extensive evaluation of the impact of practical factors such as person dependency, dataset size, race, wearing glasses, and user posture on gaze estimation accuracy.
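The baseline pipeline the abstract describes (multilevel HoG features fed to a Random Forests regressor) could be sketched roughly as below. The cell sizes, 64x64 crop resolution, screen dimensions, and the synthetic stand-in data are all illustrative assumptions; the record does not give the paper's actual feature parameters or preprocessing, which would operate on eye-region crops from the Rice TabletGaze videos.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

def multilevel_hog(image, cell_sizes=(8, 16)):
    """Concatenate HoG descriptors computed at several cell sizes:
    a stand-in for the paper's 'multilevel HoG' feature."""
    feats = [
        hog(image, orientations=8,
            pixels_per_cell=(c, c), cells_per_block=(2, 2))
        for c in cell_sizes
    ]
    return np.concatenate(feats)

# Synthetic stand-in data: 64x64 grayscale "eye region" crops and
# 2-D gaze locations (cm) on a hypothetical tablet screen.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
gaze_xy = rng.random((40, 2)) * np.array([25.0, 17.0])  # assumed screen extent

# Feature extraction, then a multi-output Random Forests regressor
# mapping features to (x, y) gaze coordinates.
X = np.stack([multilevel_hog(im) for im in images])
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, gaze_xy)

# Mean Euclidean error in cm, the same metric the abstract reports.
pred = model.predict(X[:5])
mean_err_cm = np.linalg.norm(pred - gaze_xy[:5], axis=1).mean()
```

In a faithful reproduction the error would be measured on held-out subjects (the person-independent setting the paper evaluates), not on training data as in this minimal sketch.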
DOI: 10.1007/s00138-017-0852-4
ISSN: 0932-8092
EISSN: 1432-1769
Source: Springer Nature - Complete Springer Journals
Subjects:
Accuracy
Algorithms
Communications Engineering
Computer Science
Datasets
Dependence
Image Processing and Computer Vision
Impact analysis
Multilevel
Networks
Original Paper
Pattern Recognition
Race
Regression analysis
Tablets
Vision systems