Style transformed synthetic images for real world gaze estimation by using residual neural network with embedded personal identities

Gaze interaction is essential for social communication in many scenarios; therefore, interpreting people's gaze direction is helpful for natural human-robot interaction and for interaction with virtual characters. In this study, we first adopt a residual neural network (ResNet) structure with an embedding layer of personal identity (ID-ResNet) that outperformed the current best result of 2.51° on MPIIGaze, a benchmark dataset for gaze estimation. To avoid using manually labelled data, we used UnityEye synthetic images, with and without style transformation, as the training data. We exceeded the previously reported best results on MPIIGaze (from 2.76° to 2.55°) and on UT-Multiview (from 4.01° to 3.40°). The model only needs to be fine-tuned with a few "calibration" examples for a new person to yield significant performance gains. Finally, we present the KLBS-eye dataset, which contains 15,350 images collected from 12 participants looking in nine known directions, and on which we achieved a state-of-the-art result of 0.59° ± 1.69°.
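
To make the identity-embedding idea above more concrete, the sketch below shows one plausible way to pair a ResNet feature extractor with a learned per-person embedding for gaze regression. It is a minimal PyTorch sketch based only on the abstract: the ResNet-18 backbone, the 16-dimensional identity vector, fusion by concatenation, the two-angle (yaw, pitch) output head, and the class name IdentityConditionedGazeNet are illustrative assumptions, not details taken from the paper.

# Hypothetical ID-conditioned gaze regressor; architectural details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class IdentityConditionedGazeNet(nn.Module):
    def __init__(self, num_ids: int, id_dim: int = 16):
        super().__init__()
        backbone = resnet18(weights=None)        # eye-image feature extractor
        feat_dim = backbone.fc.in_features       # 512 for ResNet-18
        backbone.fc = nn.Identity()              # drop the ImageNet classifier head
        self.backbone = backbone
        self.id_embedding = nn.Embedding(num_ids, id_dim)  # one learned vector per person
        self.head = nn.Sequential(               # regress (yaw, pitch) gaze angles
            nn.Linear(feat_dim + id_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),
        )

    def forward(self, eye_images: torch.Tensor, person_ids: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(eye_images)                 # (B, 512) image features
        ids = self.id_embedding(person_ids)               # (B, id_dim) identity features
        return self.head(torch.cat([feats, ids], dim=1))  # (B, 2) gaze angles


# Toy usage with random data (12 identities, matching the KLBS-eye participant count).
model = IdentityConditionedGazeNet(num_ids=12)
images = torch.randn(4, 3, 224, 224)   # batch of normalized eye patches (assumed size)
ids = torch.randint(0, 12, (4,))
gaze = model(images, ids)              # -> torch.Size([4, 2])

Under these assumptions, the per-person "calibration" mentioned in the abstract would amount to adding a fresh row to the embedding table for the new person and fine-tuning only that vector (and perhaps the small regression head) on a handful of labelled frames, while the backbone trained on synthetic data stays frozen; whether the paper freezes the backbone in this way is not stated in the abstract.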


Bibliographic Details
Published in: Applied intelligence (Dordrecht, Netherlands), 2023, Vol.53 (2), p.2026-2041
Main authors: Wang, Quan; Wang, Hui; Dang, Ruo-Chen; Zhu, Guang-Pu; Pi, Hai-Feng; Shic, Frederick; Hu, Bing-liang
Format: Article
Language: English
Subjects: Artificial Intelligence; Artificial neural networks; Computer Science; Datasets; Embedding; Machines; Manufacturing; Mechanical Engineering; Neural networks; Processes; Synthetic data
Online access: Full text
DOI: 10.1007/s10489-022-03481-9
Publisher: Springer US (New York)
Rights: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
ORCID: https://orcid.org/0000-0001-6086-4191
ISSN: 0924-669X
EISSN: 1573-7497
Source: SpringerLink Journals
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T05%3A50%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Style%20transformed%20synthetic%20images%20for%20real%20world%20gaze%20estimation%20by%20using%20residual%20neural%20network%20with%20embedded%20personal%20identities&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=Wang,%20Quan&rft.date=2023&rft.volume=53&rft.issue=2&rft.spage=2026&rft.epage=2041&rft.pages=2026-2041&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-022-03481-9&rft_dat=%3Cproquest_cross%3E2760352092%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2760352092&rft_id=info:pmid/&rfr_iscdi=true