FaceHack: Attacking Facial Recognition Systems Using Malicious Facial Characteristics
Recent advances in machine learning have opened up new avenues for its extensive use in real-world applications. Facial recognition, specifically, is used from simple friend suggestions in social-media platforms to critical security applications for biometric validation in automated border control at airports. Considering these scenarios, security vulnerabilities of such facial recognition systems pose serious threats with severe outcomes. Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial recognition systems, are susceptible to backdoor attacks; in other words, the DNNs turn malicious in the presence of a unique trigger. Detection mechanisms have focused on identifying these distinct trigger-based outliers statistically or through reconstructing them. In this work, we propose the use of facial characteristics as triggers to backdoored facial recognition systems. Additionally, we demonstrate that these attacks can be realised on real-time facial recognition systems. Depending on the attack scenario, the changes in the facial attributes may be embedded artificially using social-media filters or introduced naturally through facial muscle movements. We evaluate the success of the attack and validate that it does not interfere with the performance criteria of the model. We also substantiate that our triggers are undetectable by thoroughly testing them on state-of-the-art defense and detection mechanisms.
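The backdoor mechanism the abstract describes (a model that behaves normally until a specific trigger appears in the input) is typically planted by poisoning the training data. The sketch below is a generic, hypothetical illustration of label-flipping poisoning, with a fixed pixel patch standing in for the facial-attribute triggers used in the paper; `poison_dataset` and all of its parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   poison_fraction=0.1, seed=0):
    """Label-flipping backdoor poisoning on a toy image dataset.

    A fixed 3x3 pixel patch stands in for a facial-attribute trigger;
    every sample carrying the patch is relabelled as `target_label`.
    Returns the poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=max(1, int(poison_fraction * n)), replace=False)
    # Stamp the trigger patch into the bottom-right corner.
    images[idx, -3:, -3:] = trigger_value
    # Flip the poisoned samples' labels to the attacker's target identity.
    labels[idx] = target_label
    return images, labels, idx

# Toy data: 100 grayscale 32x32 "faces" spread over 10 identity classes.
X = np.zeros((100, 32, 32), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, poisoned = poison_dataset(X, y, target_label=7)
```

A model trained on `(Xp, yp)` learns to associate the patch with identity 7, so at inference time any face carrying the trigger is misclassified as that identity while clean inputs are handled normally; the paper's point is that when the trigger is a natural facial characteristic rather than a pixel patch, outlier-based defenses have nothing distinctive to reconstruct.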
Saved in:
Published in: | IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022-07, Vol.4 (3), p.361-372 |
---|---|
Main authors: | Sarkar, Esha; Benkraouda, Hadjer; Krishnan, Gopika; Gamil, Homer; Maniatakos, Michail |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 372 |
---|---|
container_issue | 3 |
container_start_page | 361 |
container_title | IEEE transactions on biometrics, behavior, and identity science |
container_volume | 4 |
creator | Sarkar, Esha; Benkraouda, Hadjer; Krishnan, Gopika; Gamil, Homer; Maniatakos, Michail |
description | Recent advances in machine learning have opened up new avenues for its extensive use in real-world applications. Facial recognition, specifically, is used from simple friend suggestions in social-media platforms to critical security applications for biometric validation in automated border control at airports. Considering these scenarios, security vulnerabilities of such facial recognition systems pose serious threats with severe outcomes. Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial recognition systems, are susceptible to backdoor attacks; in other words, the DNNs turn malicious in the presence of a unique trigger. Detection mechanisms have focused on identifying these distinct trigger-based outliers statistically or through reconstructing them. In this work, we propose the use of facial characteristics as triggers to backdoored facial recognition systems. Additionally, we demonstrate that these attacks can be realised on real-time facial recognition systems. Depending on the attack scenario, the changes in the facial attributes may be embedded artificially using social-media filters or introduced naturally through facial muscle movements. We evaluate the success of the attack and validate that it does not interfere with the performance criteria of the model. We also substantiate that our triggers are undetectable by thoroughly testing them on state-of-the-art defense and detection mechanisms. |
doi_str_mv | 10.1109/TBIOM.2021.3132132 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2637-6407 |
ispartof | IEEE transactions on biometrics, behavior, and identity science, 2022-07, Vol.4 (3), p.361-372 |
issn | 2637-6407 (ISSN/EISSN) |
language | eng |
recordid | cdi_ieee_primary_9632692 |
source | IEEE Electronic Library (IEL) |
subjects | Airline security; Airports; Artificial neural networks; attack; Automatic control; backdoor; Computational modeling; Data analysis; Data models; Face recognition; facial recognition; Facial recognition technology; Machine learning; Muscles; Neurons; Outliers (statistics); privacy; Real-time systems; Security; Shape; Training; trojan |
title | FaceHack: Attacking Facial Recognition Systems Using Malicious Facial Characteristics |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T05%3A38%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=FaceHack:%20Attacking%20Facial%20Recognition%20Systems%20Using%20Malicious%20Facial%20Characteristics&rft.jtitle=IEEE%20transactions%20on%20biometrics,%20behavior,%20and%20identity%20science&rft.au=Sarkar,%20Esha&rft.date=2022-07-01&rft.volume=4&rft.issue=3&rft.spage=361&rft.epage=372&rft.pages=361-372&rft.issn=2637-6407&rft.eissn=2637-6407&rft_id=info:doi/10.1109/TBIOM.2021.3132132&rft_dat=%3Cproquest_RIE%3E2691875859%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2691875859&rft_id=info:pmid/&rft_ieee_id=9632692&rfr_iscdi=true |