Face Anti-Spoofing Using Transformers With Relation-Aware Mechanism
Face anti-spoofing (FAS) is important for securing face recognition systems. Deep learning has achieved great success in this area; however, most existing approaches fail to consider comprehensive relation-aware local representations of live and spoof faces. To address this issue, we propose a Transformer-based Face Anti-Spoofing (TransFAS) model to explore comprehensive facial parts for FAS.
Saved in:
Published in: | IEEE transactions on biometrics, behavior, and identity science, 2022-07, Vol.4 (3), p.439-450 |
---|---|
Main authors: | Wang, Zhuo ; Wang, Qiangchang ; Deng, Weihong ; Guo, Guodong |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 450 |
---|---|
container_issue | 3 |
container_start_page | 439 |
container_title | IEEE transactions on biometrics, behavior, and identity science |
container_volume | 4 |
creator | Wang, Zhuo ; Wang, Qiangchang ; Deng, Weihong ; Guo, Guodong |
description | Face anti-spoofing (FAS) is important for securing face recognition systems. Deep learning has achieved great success in this area; however, most existing approaches fail to consider comprehensive relation-aware local representations of live and spoof faces. To address this issue, we propose a Transformer-based Face Anti-Spoofing (TransFAS) model to explore comprehensive facial parts for FAS. Besides the multi-head self-attention, which explores relations among local patches in the same layer, we propose cross-layer relation-aware attentions (CRA) to adaptively integrate local patches from different layers. Furthermore, to effectively fuse hierarchical features, we explore the best hierarchical feature fusion (HFF) structure, which can capture the complementary information between low-level artifacts and high-level semantic features of the spoofing patterns. With these novel modules, TransFAS not only improves the generalization capability of the classical vision transformer, but also achieves state-of-the-art performance on multiple benchmarks, demonstrating the superiority of the transformer-based model for FAS. |
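The cross-layer relation-aware attention described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: it assumes, for illustration only, that CRA amounts to letting current-layer patch tokens attend over patch tokens gathered from earlier layers; all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_layer_attention(q_tokens, earlier_layer_tokens, d=16):
    """Toy cross-layer attention: current-layer tokens (queries) attend
    over patch tokens concatenated from several earlier layers."""
    kv = np.concatenate(earlier_layer_tokens, axis=0)  # (sum_n, d) pooled cross-layer tokens
    scores = q_tokens @ kv.T / np.sqrt(d)              # (n_q, sum_n) scaled dot products
    weights = softmax(scores)                          # each row sums to 1
    return weights @ kv, weights                       # fused tokens, attention map

# 8 query tokens of dimension 16; two earlier layers with 8 tokens each.
q = rng.standard_normal((8, 16))
layers = [rng.standard_normal((8, 16)) for _ in range(2)]
fused, attn = cross_layer_attention(q, layers)
print(fused.shape, attn.shape)  # (8, 16) (8, 16)
```

In this reading, the attention map adaptively weights how much each earlier layer's patches contribute to each current-layer token, which is one plausible way to "adaptively integrate local patches from different layers."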
doi_str_mv | 10.1109/TBIOM.2022.3184500 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2637-6407 |
ispartof | IEEE transactions on biometrics, behavior, and identity science, 2022-07, Vol.4 (3), p.439-450 |
issn | 2637-6407 2637-6407 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TBIOM_2022_3184500 |
source | IEEE Electronic Library (IEL) |
subjects | cross-layer relation-aware attentions ; Face anti-spoofing ; Face recognition ; Faces ; Feature extraction ; hierarchical feature fusion ; Machine learning ; Representation learning ; Semantics ; Spoofing ; Structural hierarchy ; Task analysis ; transformer networks ; Transformers |
title | Face Anti-Spoofing Using Transformers With Relation-Aware Mechanism |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T16%3A56%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Face%20Anti-Spoofing%20Using%20Transformers%20With%20Relation-Aware%20Mechanism&rft.jtitle=IEEE%20transactions%20on%20biometrics,%20behavior,%20and%20identity%20science&rft.au=Wang,%20Zhuo&rft.date=2022-07-01&rft.volume=4&rft.issue=3&rft.spage=439&rft.epage=450&rft.pages=439-450&rft.issn=2637-6407&rft.eissn=2637-6407&rft_id=info:doi/10.1109/TBIOM.2022.3184500&rft_dat=%3Cproquest_RIE%3E2691875219%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2691875219&rft_id=info:pmid/&rft_ieee_id=9817442&rfr_iscdi=true |