A multi-texture approach for estimating iris positions in the eye using 2.5D Active Appearance Models
This paper describes a new approach for detecting the iris center. Starting from a training base that contains only people in frontal view looking straight ahead, our model (based on 2.5D Active Appearance Models (AAM)) captures iris movements both for people in front...
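The abstract (quoted in full in the description field below) sketches an analysis-by-synthesis scheme: a hole is cut in the local eye texture where the sclera and iris appear, and a separate iris texture slides underneath it, so any gaze direction can be rendered and matched against the input image. Below is a minimal illustrative sketch of that core idea in Python/NumPy; the function and variable names are assumptions made for this example, and it deliberately omits the paper's 2.5D AAM parameterization and the multi-objective optimization over head pose.

```python
# Illustrative sketch only: slide an iris texture behind a hole in a local
# eye texture and keep the offset whose rendering best matches the observed
# patch. Names, shapes, and the brute-force search are assumptions; the
# paper itself drives the iris position through 2.5D AAM parameters.
import numpy as np


def composite_eye(eye_tex, hole_mask, iris_tex, offset):
    """Render the eye patch with the iris texture placed at (x, y) = offset
    behind the hole region. Assumes H x W x C textures and an H x W
    boolean hole mask."""
    h, w = eye_tex.shape[:2]
    ih, iw = iris_tex.shape[:2]
    canvas = np.zeros_like(eye_tex)
    x, y = offset
    # Paste the iris texture at (x, y), clipped to the patch boundaries.
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + iw, w), min(y + ih, h)
    if x0 < x1 and y0 < y1:
        canvas[y0:y1, x0:x1] = iris_tex[y0 - y:y1 - y, x0 - x:x1 - x]
    # Eye texture everywhere except the hole, where the iris shows through.
    return np.where(hole_mask[..., None], canvas, eye_tex)


def estimate_iris_offset(observed, eye_tex, hole_mask, iris_tex, candidates):
    """Analysis by synthesis: return the candidate offset whose rendering has
    the smallest sum of squared differences to the observed eye patch."""
    errors = [
        np.sum((composite_eye(eye_tex, hole_mask, iris_tex, c).astype(float)
                - observed.astype(float)) ** 2)
        for c in candidates
    ]
    return candidates[int(np.argmin(errors))]
```

In this toy version the candidate offsets would simply be enumerated over a grid of plausible iris positions inside the eye region; the paper instead optimizes appearance and pose parameters jointly, which is what lets it stay robust under large head rotations and eyeglasses.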
Saved in:
Main authors: Salam, H.; Stoiber, N.; Seguier, R.
Format: Conference Proceeding
Language: eng
Subjects: Active appearance model; gaze detection; Head; Iris; Iris recognition; iris tracking; Optimization; Skin
Online access: Order full text
container_end_page | 1836 |
container_issue | |
container_start_page | 1833 |
container_title | 2012 19th IEEE International Conference on Image Processing |
container_volume | |
creator | Salam, H.; Stoiber, N.; Seguier, R. |
description | This paper describes a new approach for detecting the iris center. Starting from a training base that contains only people in frontal view looking straight ahead, our model (based on 2.5D Active Appearance Models (AAM)) captures iris movements both for people in frontal view and for people with different head poses. We merge an iris model with a local eye model in which holes are put in place of the white (sclera) and iris region. The iris texture slides under the eye hole, making it possible to synthesize, and thus analyze, any gaze direction. We also propose a multi-objective optimization technique to deal with large head poses. Compared with a 2.5D AAM trained on faces with different gaze directions, our method is more robust and more accurate, particularly when the head pose varies and when subjects wear eyeglasses. |
doi_str_mv | 10.1109/ICIP.2012.6467239 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1522-4880 |
ispartof | 2012 19th IEEE International Conference on Image Processing, 2012, p.1833-1836 |
issn | 1522-4880; 2381-8549 |
language | eng |
recordid | cdi_ieee_primary_6467239 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Active appearance model; gaze detection; Head; Iris; Iris recognition; iris tracking; Optimization; Skin |
title | A multi-texture approach for estimating iris positions in the eye using 2.5D Active Appearance Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T07%3A34%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=A%20multi-texture%20approach%20for%20estimating%20iris%20positions%20in%20the%20eye%20using%202.5D%20Active%20Appearance%20Models&rft.btitle=2012%2019th%20IEEE%20International%20Conference%20on%20Image%20Processing&rft.au=Salam,%20H.&rft.date=2012-09&rft.spage=1833&rft.epage=1836&rft.pages=1833-1836&rft.issn=1522-4880&rft.eissn=2381-8549&rft.isbn=1467325341&rft.isbn_list=9781467325349&rft_id=info:doi/10.1109/ICIP.2012.6467239&rft_dat=%3Cieee_6IE%3E6467239%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781467325332&rft.eisbn_list=1467325325&rft.eisbn_list=9781467325325&rft.eisbn_list=1467325333&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6467239&rfr_iscdi=true |