ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
Published in: | IEEE access 2022, Vol.10, p.33602-33615 |
---|---|
Main authors: | Choi, Seok-Hwan; Shin, Jin-Myeong; Liu, Peng; Choi, Yoon-Ho |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 33615 |
---|---|
container_issue | |
container_start_page | 33602 |
container_title | IEEE access |
container_volume | 10 |
creator | Choi, Seok-Hwan; Shin, Jin-Myeong; Liu, Peng; Choi, Yoon-Ho |
description | An adversarial example, an input instance with small, intentional feature perturbations, represents a concrete problem in artificial intelligence safety. As an emerging defense against adversarial examples, generative adversarial network (GAN)-based defense methods have recently been studied. However, the performance of the state-of-the-art GAN-based defense methods is limited: the defended target deep neural network models are robust against adversarial examples but make false decisions on legitimate input data. To solve this accuracy degradation on legitimate input data, we propose a new GAN-based defense method, called Adversarially Robust Generative Adversarial Networks (ARGAN). While converting input data to the machine learning model using a two-step transformation architecture, ARGAN trains the generator model to reflect the vulnerability of the target deep neural network model to adversarial examples and optimizes the generator's parameter values against a joint loss function. Experimental results on various datasets collected from diverse applications show that the accuracy of ARGAN on legitimate input data remains high while keeping the target deep neural network model robust against adversarial examples. We also show that ARGAN outperforms the state-of-the-art GAN-based defense methods in accuracy. |
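The joint-loss idea described in the abstract — a generator that transforms possibly perturbed inputs back toward legitimate data while keeping the target model's decision correct — can be sketched in miniature as follows. Everything here is a toy stand-in: the elementwise affine "generator", the fixed linear softmax "target classifier", the numeric gradients, and the loss weight `lam` are illustrative assumptions, not the authors' actual ARGAN architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed "target classifier": a linear softmax over 4-dim inputs, 2 classes.
W = rng.normal(size=(2, 4))

def classify_logits(x):
    return W @ x

def cross_entropy(logits, y):
    z = logits - logits.max()                  # stabilized softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[y] + 1e-12)

# Toy "generator": an elementwise affine map g(x) = a*x + b,
# standing in for a deep generator network.
def generator(params, x):
    a, b = params[:4], params[4:]
    return a * x + b

def joint_loss(params, x_adv, x_clean, y, lam=1.0):
    """Joint loss in the spirit of the abstract: a reconstruction term pulls
    the transformed input back toward the clean sample, and a classification
    term keeps the target model's decision correct."""
    g = generator(params, x_adv)
    recon = np.mean((g - x_clean) ** 2)
    clf = cross_entropy(classify_logits(g), y)
    return recon + lam * clf

def numeric_grad(f, params, eps=1e-5):
    # Central finite differences; fine for 8 parameters in a toy example.
    grad = np.zeros_like(params)
    for i in range(params.size):
        up, dn = params.copy(), params.copy()
        up[i] += eps
        dn[i] -= eps
        grad[i] = (f(up) - f(dn)) / (2 * eps)
    return grad

# One clean sample, the label the classifier assigns it, and a perturbed copy
# (random noise stands in for a crafted adversarial perturbation).
x_clean = rng.normal(size=4)
y = int(classify_logits(x_clean).argmax())
x_adv = x_clean + 0.5 * rng.normal(size=4)

params = np.concatenate([np.ones(4), np.zeros(4)])   # start at the identity map
f = lambda p: joint_loss(p, x_adv, x_clean, y)

losses = [f(params)]
for _ in range(200):
    params -= 0.1 * numeric_grad(f, params)
    losses.append(f(params))
```

Minimizing the two terms jointly is the point: optimizing reconstruction alone can leave the transformed input on the wrong side of the target model's decision boundary, while the classification term alone could distort legitimate inputs; balancing both addresses the accuracy degradation on legitimate data that the abstract highlights.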
doi_str_mv | 10.1109/ACCESS.2022.3160283 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2022, Vol.10, p.33602-33615 |
issn | 2169-3536 |
language | eng |
recordid | cdi_crossref_primary_10_1109_ACCESS_2022_3160283 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Accuracy; Adversarial examples; adversarial perturbation; Artificial intelligence; Artificial neural networks; Data models; Deep learning; deep neural networks (DNNs); Defense; Generative adversarial networks; Generators; Machine learning; Neural networks; Noise reduction; Perturbation; Perturbation methods; Robustness; security; Training |
title | ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T14%3A46%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=ARGAN:%20Adversarially%20Robust%20Generative%20Adversarial%20Networks%20for%20Deep%20Neural%20Networks%20Against%20Adversarial%20Examples&rft.jtitle=IEEE%20access&rft.au=Choi,%20Seok-Hwan&rft.date=2022&rft.volume=10&rft.spage=33602&rft.epage=33615&rft.pages=33602-33615&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2022.3160283&rft_dat=%3Cproquest_cross%3E2645983811%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2645983811&rft_id=info:pmid/&rft_ieee_id=9737142&rft_doaj_id=oai_doaj_org_article_8ab9dc9351d0496ca335ae7c651cb5d6&rfr_iscdi=true |