Robust Deep Learning Ensemble Against Deception

Bibliographic Details

Published in: IEEE Transactions on Dependable and Secure Computing, 2021-07, Vol. 18 (4), p. 1513-1527
Main Authors: Wei, Wenqi; Liu, Ling
Format: Article
Language: English
Online Access: Order full text
Description: Deep neural network (DNN) models are known to be vulnerable to maliciously crafted adversarial examples and to out-of-distribution inputs drawn sufficiently far away from the training data. How to protect a machine learning model against deception by both types of destructive inputs remains an open challenge. This article presents XEnsemble, a diversity ensemble verification methodology for enhancing the adversarial robustness of DNN models against deception caused by either adversarial examples or out-of-distribution inputs. XEnsemble by design has three unique capabilities. First, XEnsemble builds diverse input denoising verifiers by leveraging different data cleaning techniques. Second, XEnsemble develops a disagreement-diversity ensemble learning methodology for guarding the output of the prediction model against deception. Third, XEnsemble provides a suite of algorithms to combine input verification and output verification to protect the DNN prediction models from both adversarial examples and out-of-distribution inputs. Evaluated using 11 popular adversarial attacks and two representative out-of-distribution datasets, we show that XEnsemble achieves a high defense success rate against adversarial examples and a high detection success rate against out-of-distribution data inputs, and outperforms existing representative defense methods with respect to robustness and defensibility.
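
The description names three building blocks: input denoising verifiers built from different data cleaning techniques, a disagreement-diversity ensemble guarding the model output, and algorithms that combine the two verification stages. The Python sketch below illustrates only the first building block in a minimal form and under simplifying assumptions; it is not the authors' implementation, and the two denoisers (median_smooth, bit_depth_reduce), the predict interface, and the disagreement threshold are hypothetical choices made for the example.

```python
# Minimal sketch of an input-denoising ensemble verifier in the spirit of the
# XEnsemble idea described above. NOT the authors' implementation: the two
# denoisers, the model interface, and the threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter


def median_smooth(x, k=3):
    # Median-filter the spatial dimensions of an (H, W, C) image in [0, 1].
    return median_filter(x, size=(k, k, 1))


def bit_depth_reduce(x, bits=4):
    # Quantize pixel values in [0, 1] down to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels


def verified_predict(predict, x, denoisers, disagree_threshold=0.5):
    """Return a label for x, or None if the denoising verifiers disagree.

    predict: callable mapping one (H, W, C) image to a class label (hypothetical).
    denoisers: list of data-cleaning transforms used as input verifiers.
    """
    raw_label = predict(x)
    cleaned_labels = [predict(d(x)) for d in denoisers]
    # Fraction of denoised variants whose prediction flips away from the raw label.
    disagreement = np.mean([lbl != raw_label for lbl in cleaned_labels])
    if disagreement > disagree_threshold:
        return None  # reject: likely adversarial or out-of-distribution input
    # Accept: majority vote over the raw and denoised predictions.
    votes = [raw_label] + cleaned_labels
    return max(set(votes), key=votes.count)


# Usage with a hypothetical classifier exposing predict_one(image):
#   label = verified_predict(model.predict_one, image,
#                            [median_smooth, bit_depth_reduce])
#   if label is None: treat the input as deceptive and refuse to answer.
```

The published method goes further: it pairs such input verifiers with a disagreement-diversity ensemble over output models and with algorithms that combine input and output verification, none of which this sketch attempts to reproduce.
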
DOI: 10.1109/TDSC.2020.3024660
ISSN: 1545-5971
EISSN: 1941-0018
Source: IEEE/IET Electronic Library
Subjects:
adversarial attack and defense
Algorithms
Artificial neural networks
Data models
Deception
Deep learning
ensemble method
Machine learning
Neural networks
Prediction algorithms
Prediction models
Predictive models
Robust deep learning
Robustness
Training
Verification
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T07%3A45%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robust%20Deep%20Learning%20Ensemble%20Against%20Deception&rft.jtitle=IEEE%20transactions%20on%20dependable%20and%20secure%20computing&rft.au=Wei,%20Wenqi&rft.date=2021-07-01&rft.volume=18&rft.issue=4&rft.spage=1513&rft.epage=1527&rft.pages=1513-1527&rft.issn=1545-5971&rft.eissn=1941-0018&rft.coden=ITDSCM&rft_id=info:doi/10.1109/TDSC.2020.3024660&rft_dat=%3Cproquest_RIE%3E2549757059%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2549757059&rft_id=info:pmid/&rft_ieee_id=9200713&rfr_iscdi=true