Complementary Cohort Strategy for Multimodal Face Pair Matching
Face pair matching is the task of determining whether two face images represent the same person. Because of the limited expressive information embedded in the two face images, as well as the various sources of facial variation, it is quite a difficult problem. To address the issue of the few available images provided to represent each face, we propose to exploit an extra cohort set (whose identities differ from those being compared) through a series of cohort list comparisons.
Saved in:
Published in: | IEEE transactions on information forensics and security 2016-05, Vol.11 (5), p.937-950 |
---|---|
Main authors: | Yunlian Sun; Nasrollahi, Kamal; Zhenan Sun; Tieniu Tan |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 950 |
---|---|
container_issue | 5 |
container_start_page | 937 |
container_title | IEEE transactions on information forensics and security |
container_volume | 11 |
creator | Yunlian Sun; Nasrollahi, Kamal; Zhenan Sun; Tieniu Tan |
description | Face pair matching is the task of determining whether two face images represent the same person. Because of the limited expressive information embedded in the two face images, as well as the various sources of facial variation, it is quite a difficult problem. To address the issue of the few available images provided to represent each face, we propose to exploit an extra cohort set (whose identities differ from those being compared) through a series of cohort list comparisons. Useful cohort coefficients are then extracted from both the sorted cohort identities and the sorted cohort images to obtain complementary information. To increase robustness to complicated facial variations, we further employ multiple face modalities, owing to their complementary value to each other for the face pair matching task. The final decision is made by fusing the extracted cohort coefficients with the direct matching score across all available face modalities. To investigate the capacity of each individual modality for matching faces, the cohort behavior, and the performance achieved with our complementary cohort strategy, we conduct a set of experiments on two recently collected multimodal face databases. The results show that different modalities lead to different face pair matching performance. For each modality, employing our cohort scheme significantly reduces the equal error rate, and applying the proposed multimodal complementary cohort strategy yields the best performance on our face pair matching task. |
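The cohort list comparison and score fusion described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact formulation: the cosine similarity, the rank and T-norm-style coefficients, the sum-rule fusion, and all function names are hypothetical choices for a minimal example.

```python
import numpy as np

def cohort_coefficients(probe, gallery, cohort, similarity):
    """Sketch: compare a face pair against a shared cohort set and
    extract rank and normalization coefficients (illustrative only)."""
    # Direct matching score between the two faces being compared
    direct = similarity(probe, gallery)
    # Similarity of each face in the pair to every cohort image
    s_p = np.array([similarity(probe, c) for c in cohort])
    s_g = np.array([similarity(gallery, c) for c in cohort])
    # Fraction of the sorted cohort list that the direct score outranks;
    # a genuine pair should beat most cohort impostors
    rank_p = float(np.mean(s_p < direct))
    rank_g = float(np.mean(s_g < direct))
    # T-norm-style coefficient: direct score standardized by cohort statistics
    t_p = float((direct - s_p.mean()) / (s_p.std() + 1e-9))
    return direct, rank_p, rank_g, t_p

def fuse(per_modality_scores, weights=None):
    """Sum-rule fusion of per-modality score vectors (direct score plus
    cohort coefficients), optionally weighted per modality."""
    m = np.asarray(per_modality_scores, dtype=float)
    w = np.full(len(m), 1.0 / len(m)) if weights is None else np.asarray(weights, dtype=float)
    return float(w @ m.sum(axis=1))

# Toy usage with cosine similarity on random features
rng = np.random.default_rng(0)
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
probe = rng.normal(size=64)
gallery = probe + 0.05 * rng.normal(size=64)       # same identity, mild variation
cohort = [rng.normal(size=64) for _ in range(20)]  # disjoint identities
direct, rank_p, rank_g, t_p = cohort_coefficients(probe, gallery, cohort, cos)
fused = fuse([[direct, rank_p], [0.8 * direct, rank_g]])
```

In this toy setup the genuine pair outranks the entire cohort list for both faces, which is the kind of complementary evidence the abstract describes fusing with the direct matching score.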
doi_str_mv | 10.1109/TIFS.2015.2512561 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1556-6013 |
ispartof | IEEE transactions on information forensics and security, 2016-05, Vol.11 (5), p.937-950 |
issn | 1556-6013; 1556-6021 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TIFS_2015_2512561 |
source | IEEE Electronic Library (IEL) |
subjects | cohort information; Computer vision; Estimating techniques; Face; Face recognition; multimodal fusion; RGB-D; SAT assessment; Sun; Three-dimensional displays; Training |
title | Complementary Cohort Strategy for Multimodal Face Pair Matching |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T04%3A06%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Complementary%20Cohort%20Strategy%20for%20Multimodal%20Face%20Pair%20Matching&rft.jtitle=IEEE%20transactions%20on%20information%20forensics%20and%20security&rft.au=Yunlian%20Sun&rft.date=2016-05-01&rft.volume=11&rft.issue=5&rft.spage=937&rft.epage=950&rft.pages=937-950&rft.issn=1556-6013&rft.eissn=1556-6021&rft.coden=ITIFA6&rft_id=info:doi/10.1109/TIFS.2015.2512561&rft_dat=%3Cproquest_RIE%3E4047841081%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1787257868&rft_id=info:pmid/&rft_ieee_id=7366578&rfr_iscdi=true |