PuVAE: A Variational Autoencoder to Purify Adversarial Examples

Deep neural networks are widely used and exhibit excellent performance in many areas. However, they are vulnerable to adversarial attacks that compromise networks at inference time by applying elaborately designed perturbations to input data. Although several defense methods have been proposed to address specific attacks, other types of attacks can circumvent these defense mechanisms. Therefore, we propose the Purifying Variational AutoEncoder (PuVAE), a method to purify adversarial examples. The proposed method eliminates an adversarial perturbation by projecting an adversarial example onto the manifold of each class and taking the closest projection as the purified sample. We experimentally illustrate the robustness of PuVAE against various attack methods without any prior knowledge about the attacks. In our experiments, the proposed method performs competitively with state-of-the-art defense methods, and its inference time is approximately 130 times faster than that of Defense-GAN, a state-of-the-art purification method.

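The purification step described in the abstract (reconstruct the input under each class condition, then keep the reconstruction closest to the input) can be illustrated with a short PyTorch sketch. The conditional-VAE interface (`cvae.encode`/`cvae.decode`), the `classifier`, and the RMSE distance below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the purification rule from the abstract: reconstruct the
# input under every class condition and keep the reconstruction closest to
# the input as the purified sample. The encode/decode interface and the RMSE
# distance are assumptions for illustration only.
import torch
import torch.nn.functional as F


def purify(x, cvae, num_classes):
    """Purify a batch x of shape (B, C, H, W) with a trained conditional VAE."""
    batch_size = x.size(0)
    best_recon = torch.zeros_like(x)
    best_dist = torch.full((batch_size,), float("inf"), device=x.device)

    for c in range(num_classes):
        # Condition every sample in the batch on class c (one-hot label).
        labels = torch.full((batch_size,), c, dtype=torch.long, device=x.device)
        y = F.one_hot(labels, num_classes).float()

        mu, logvar = cvae.encode(x, y)   # project onto the class-c manifold
        recon = cvae.decode(mu, y)       # decode the posterior mean (no sampling)

        # Per-sample root-mean-square distance between input and reconstruction.
        dist = ((recon - x) ** 2).flatten(start_dim=1).mean(dim=1).sqrt()

        closer = dist < best_dist
        best_dist = torch.where(closer, dist, best_dist)
        best_recon[closer] = recon[closer]

    return best_recon


# Usage: feed the purified batch to the downstream classifier.
# logits = classifier(purify(x_adv, cvae, num_classes=10))
```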

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, p. 126582-126593
Main authors: Hwang, Uiwon; Park, Jaewoo; Jang, Hyemi; Yoon, Sungroh; Cho, Nam Ik
Format: Article
Language: English
Publisher: IEEE, Piscataway
ISSN / EISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2939352
Subjects: Adversarial attack; Artificial neural networks; Biological neural networks; Deep learning; Gallium nitride; Inference; Law enforcement; Linear programming; Perturbation; Perturbation methods; Training; Training data; Variational autoencoder
Online access: Full text (IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek)