Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks

Deep neural network (DNN) classifiers are powerful tools that drive a broad spectrum of important applications, from image recognition to autonomous vehicles. Unfortunately, DNNs are known to be vulnerable to adversarial attacks that affect virtually all state-of-the-art models. These attacks make small, imperceptible modifications to inputs that are sufficient to induce the DNNs to produce the wrong classification. In this paper, we propose a novel, lightweight adversarial correction and/or detection mechanism for image classifiers that relies on undervolting (running a chip at a voltage slightly below its safe margin). We propose using controlled undervolting of the chip running the inference process in order to introduce a limited number of compute errors. We show that these errors disrupt the adversarial input in a way that can be used either to correct the classification or to detect the input as adversarial. We evaluate the proposed solution in an FPGA design and through software simulation. We evaluate 10 attacks and show average detection rates of 77% and 90% on two popular DNNs.
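The detection idea described in the abstract lends itself to a simple software approximation. The sketch below is a hypothetical illustration, not the authors' implementation: it mimics undervolting-induced compute errors by flipping random mantissa bits in the hidden activations of a toy NumPy classifier, then flags an input as adversarial when fault-injected runs disagree with the fault-free prediction too often. All names and parameters here (`flip_random_bits`, `detect_adversarial`, the error rate and threshold) are invented for this example; the paper's actual defense injects errors physically via the supply voltage on an FPGA.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_random_bits(x, error_rate=1e-3):
    # Flip one random mantissa bit in a small fraction of the float32
    # activations. A crude stand-in for undervolting-induced faults;
    # real timing errors are not uniformly random like this.
    flat = x.astype(np.float32).ravel()
    n_faults = max(1, int(error_rate * flat.size))
    idx = rng.choice(flat.size, size=n_faults, replace=False)
    victims = flat[idx].view(np.uint32)
    masks = np.uint32(1) << rng.integers(0, 23, size=n_faults).astype(np.uint32)
    flat[idx] = (victims ^ masks).view(np.float32)
    return flat.reshape(x.shape)

def classify(x, weights, faulty=False):
    # Toy two-layer ReLU classifier; optionally corrupt the hidden layer.
    w1, w2 = weights
    h = np.maximum(x @ w1, 0.0)
    if faulty:
        h = flip_random_bits(h)
    return int(np.argmax(h @ w2))

def detect_adversarial(x, weights, n_runs=8, threshold=0.5):
    # Flag the input when fault-injected runs disagree with the
    # fault-free prediction more often than the threshold allows.
    nominal = classify(x, weights)
    disagreements = sum(
        classify(x, weights, faulty=True) != nominal for _ in range(n_runs)
    )
    return disagreements / n_runs > threshold

if __name__ == "__main__":
    w = (rng.standard_normal((32, 64)), rng.standard_normal((64, 10)))
    x = rng.standard_normal(32)
    print("flagged as adversarial:", detect_adversarial(x, w))
```

Restricting flips to mantissa bits keeps each injected error bounded, loosely paralleling the paper's goal of introducing only a limited number of compute errors; the correction variant mentioned in the abstract would adjust the classification rather than merely flag the input.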

Bibliographic Details
Published in: arXiv.org, 2021-08
Main authors: Majumdar, Saikat; Samavatian, Mohammad Hossein; Barber, Kristin; Teodorescu, Radu
Format: Article
Language: English
Subjects: Artificial neural networks; Classification; Classifiers; Computer Science - Cryptography and Security; Computer Science - Hardware Architecture; Computer Science - Learning; Evaluation; Machine learning; Object recognition
Online access: Full text
DOI: 10.48550/arXiv.2107.09804
EISSN: 2331-8422