Adaptive perturbation adversarial training: based on reinforcement learning
Adversarial training has become the primary method of defending against adversarial samples, but several shortcomings make it hard to apply in practice. One of these is that it reduces recognition accuracy on normal samples. Adaptive perturbation adversarial training was proposed to alleviate this problem: it trains on marginal adversarial samples that lie close to the decision boundary but do not cross it, which improves recognition accuracy while preserving the model's robustness. Searching for marginal adversarial samples, however, incurs additional computational cost. This paper proposes a reinforcement-learning-based method for finding marginal adversarial samples and combines it with recent fast adversarial training techniques, effectively speeding up training and reducing its cost.
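The idea is concrete enough to sketch. Below is a minimal PyTorch illustration, assuming an image classifier with inputs in [0, 1]: it grows an FGSM perturbation in fixed increments and keeps, per sample, the largest perturbation that does not flip the model's prediction, i.e. a marginal sample near but not across the decision boundary. The linear step search merely stands in for the paper's reinforcement-learning search, and every name here (marginal_adversarial_example, train_step, eps_max, n_steps) is hypothetical, not taken from the paper.

```python
# Hedged sketch of "marginal adversarial sample" training. NOT the authors'
# implementation: their RL-based boundary search is replaced by a naive
# linear scan over FGSM step sizes, purely for illustration.
import torch
import torch.nn.functional as F


def marginal_adversarial_example(model, x, y, eps_max=8 / 255, n_steps=8):
    """Return, per sample, the largest FGSM perturbation of x (in increments
    of eps_max / n_steps) whose prediction still matches the label y, i.e. a
    sample near the decision boundary that has not crossed it."""
    model.eval()
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    direction = x_req.grad.sign()  # one-shot FGSM ascent direction
    best = x.clone().detach()      # fall back to the clean sample
    for k in range(1, n_steps + 1):
        eps = eps_max * k / n_steps
        x_adv = (x + eps * direction).clamp(0.0, 1.0)  # assumes inputs in [0, 1]
        with torch.no_grad():
            still_correct = model(x_adv).argmax(dim=1) == y
        # Keep the stronger perturbation only where the boundary was not crossed.
        mask = still_correct.view(-1, *([1] * (x.dim() - 1)))
        best = torch.where(mask, x_adv, best)
    return best.detach()


def train_step(model, optimizer, x, y):
    """One adversarial-training step on marginal (near-boundary) samples."""
    x_marg = marginal_adversarial_example(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_marg), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on these near-boundary samples, rather than on maximally perturbed ones, is what the abstract credits for preserving accuracy on clean samples while keeping robustness.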
Saved in:

Published in: | arXiv.org 2021-08 |
---|---|
Main authors: | Nie, Zhishen; Lin, Ying; Ren, Sp; Zhang, Lan |
Format: | Article |
Language: | eng |
Subjects: | Adaptive sampling; Computing costs; Learning; Model accuracy; Perturbation; Recognition; Training |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Nie, Zhishen; Lin, Ying; Ren, Sp; Zhang, Lan |
description | Adversarial training has become the primary method of defending against adversarial samples, but several shortcomings make it hard to apply in practice. One of these is that it reduces recognition accuracy on normal samples. Adaptive perturbation adversarial training was proposed to alleviate this problem: it trains on marginal adversarial samples that lie close to the decision boundary but do not cross it, which improves recognition accuracy while preserving the model's robustness. Searching for marginal adversarial samples, however, incurs additional computational cost. This paper proposes a reinforcement-learning-based method for finding marginal adversarial samples and combines it with recent fast adversarial training techniques, effectively speeding up training and reducing its cost. |
format | Article |
fullrecord | (raw ProQuest XML record omitted; it duplicates the fields in this record and adds: publisher Cornell University Library, Ithaca; publication date 2021-08-30) |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2567811700 |
source | Free E-Journals |
subjects | Adaptive sampling; Computing costs; Learning; Model accuracy; Perturbation; Recognition; Training |
title | Adaptive perturbation adversarial training: based on reinforcement learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T10%3A44%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Adaptive%20perturbation%20adversarial%20training:%20based%20on%20reinforcement%20learning&rft.jtitle=arXiv.org&rft.au=Nie,%20Zhishen&rft.date=2021-08-30&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2567811700%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2567811700&rft_id=info:pmid/&rfr_iscdi=true |