How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models

Machine learning models are vulnerable to Adversarial Examples: minor perturbations to input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNN), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively.

Full description

Saved in:
Bibliographic details
Published in: arXiv.org 2019-01
Main authors: Grosse, Kathrin; Pfaff, David; Smith, Michael Thomas; Backes, Michael
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Grosse, Kathrin
Pfaff, David
Smith, Michael Thomas
Backes, Michael
description Machine learning models are vulnerable to Adversarial Examples: minor perturbations to input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNN), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively. Many defenses therefore rely on hidden assumptions that are subsequently subverted by increasingly elaborate attacks. This is not surprising: deep learning notoriously lacks a comprehensive mathematical framework to provide meaningful guarantees. In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference. Across different models and datasets, we find that deviating levels of uncertainty reflect the perturbation introduced to benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian Processes. Our experiments demonstrate that even unoptimized uncertainty thresholds already reject adversarial examples in many scenarios. Comment: Thresholds can be broken in a modified attack, which was done in arXiv:1812.02606 (The limitations of model uncertainty in adversarial settings).
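The rejection mechanism described in the abstract can be illustrated with a short, hedged sketch. The following Python snippet is not the authors' implementation: it uses scikit-learn's GaussianProcessClassifier on synthetic data, predictive entropy as a stand-in for the paper's GP uncertainty estimates, a random perturbation as a placeholder for a real (e.g. white-box) attack, and an arbitrary, unoptimized threshold.

```python
# Minimal sketch (assumptions, not the paper's setup): train a GP classifier
# and reject inputs whose predictive uncertainty exceeds a fixed threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (illustrative only).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gp.fit(X_train, y_train)

def predictive_entropy(model, X):
    """Entropy of the predictive class distribution; higher means more uncertain."""
    p = model.predict_proba(X)
    return -np.sum(p * np.log(p + 1e-12), axis=1)

# Crude stand-in for an adversarial attack: a large random perturbation.
# Real attacks (such as the white-box GP attacks in the paper) are targeted,
# but the rejection rule below is applied in the same way.
rng = np.random.default_rng(0)
X_adv = X_test + 0.5 * rng.standard_normal(X_test.shape)

threshold = 0.5  # fixed, unoptimized uncertainty threshold (assumed value)
for name, data in [("clean", X_test), ("perturbed", X_adv)]:
    H = predictive_entropy(gp, data)
    print(f"{name}: rejected {np.mean(H > threshold):.0%} of inputs")
```

Inputs whose uncertainty exceeds the threshold are rejected; the paper's finding is that even such an unoptimized rule already filters out many adversarial examples, although, as the comment above notes, the threshold can be circumvented by a modified attack (arXiv:1812.02606).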
format Article
fullrecord <record><control><sourceid>proquest</sourceid><recordid>TN_cdi_proquest_journals_2071579825</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2071579825</sourcerecordid><originalsourceid>FETCH-proquest_journals_20715798253</originalsourceid><addsrcrecordid>eNqNjN1qwkAQRpdCQbG-w0CvA8mmMXpVSok_UEGo4qUMyagryWw6s_GHvnwr9AF69cE5h-_B9G2aJtH4xdqeGaqe4ji2o9xmWdo333N_ga14PsBbA4tXiOAzdNXN3UF1JlEUhzUUV2zamhSQKwhHcgKLpsUygGfYcEkS0HG4gWOYYafqkGElviRVWGJ5dEzwQSh8f176imp9Mo97rJWGfzswz9Ni_T6PWvFfHWnYnXwn_Kt2Ns6TLJ-MbZb-r_oBbSRNmA</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2071579825</pqid></control><display><type>article</type><title>How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models</title><source>Free E- Journals</source><creator>Grosse, Kathrin ; Pfaff, David ; Smith, Michael Thomas ; Backes, Michael</creator><creatorcontrib>Grosse, Kathrin ; Pfaff, David ; Smith, Michael Thomas ; Backes, Michael</creatorcontrib><description>Machine learning models are vulnerable to Adversarial Examples: minor perturbations to input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNN), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively. Many defenses therefore rely on hidden assumptions that are subsequently subverted by increasingly elaborate attacks. This is not surprising: deep learning notoriously lacks a comprehensive mathematical framework to provide meaningful guarantees. In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference. Across different models and datasets, we find deviating levels of uncertainty reflect the perturbation introduced to benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian Processes. Our experiments demonstrate that even unoptimized uncertainty thresholds already reject adversarial examples in many scenarios. Comment: Thresholds can be broken in a modified attack, which was done in arXiv:1812.02606 (The limitations of model uncertainty in adversarial settings).</description><identifier>EISSN: 2331-8422</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Artificial intelligence ; Bayesian analysis ; Gaussian process ; Machine learning ; Mathematical models ; Neural networks ; State of the art ; Statistical inference ; Thresholds ; Uncertainty</subject><ispartof>arXiv.org, 2019-01</ispartof><rights>2019. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>780,784</link.rule.ids></links><search><creatorcontrib>Grosse, Kathrin</creatorcontrib><creatorcontrib>Pfaff, David</creatorcontrib><creatorcontrib>Smith, Michael Thomas</creatorcontrib><creatorcontrib>Backes, Michael</creatorcontrib><title>How Wrong Am I? 
- Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models</title><title>arXiv.org</title><description>Machine learning models are vulnerable to Adversarial Examples: minor perturbations to input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNN), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively. Many defenses therefore rely on hidden assumptions that are subsequently subverted by increasingly elaborate attacks. This is not surprising: deep learning notoriously lacks a comprehensive mathematical framework to provide meaningful guarantees. In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference. Across different models and datasets, we find deviating levels of uncertainty reflect the perturbation introduced to benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian Processes. Our experiments demonstrate that even unoptimized uncertainty thresholds already reject adversarial examples in many scenarios. Comment: Thresholds can be broken in a modified attack, which was done in arXiv:1812.02606 (The limitations of model uncertainty in adversarial settings).</description><subject>Artificial intelligence</subject><subject>Bayesian analysis</subject><subject>Gaussian process</subject><subject>Machine learning</subject><subject>Mathematical models</subject><subject>Neural networks</subject><subject>State of the art</subject><subject>Statistical inference</subject><subject>Thresholds</subject><subject>Uncertainty</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2019</creationdate><recordtype>article</recordtype><sourceid>ABUWG</sourceid><sourceid>AFKRA</sourceid><sourceid>AZQEC</sourceid><sourceid>BENPR</sourceid><sourceid>CCPQU</sourceid><sourceid>DWQXO</sourceid><recordid>eNqNjN1qwkAQRpdCQbG-w0CvA8mmMXpVSok_UEGo4qUMyagryWw6s_GHvnwr9AF69cE5h-_B9G2aJtH4xdqeGaqe4ji2o9xmWdo333N_ga14PsBbA4tXiOAzdNXN3UF1JlEUhzUUV2zamhSQKwhHcgKLpsUygGfYcEkS0HG4gWOYYafqkGElviRVWGJ5dEzwQSh8f176imp9Mo97rJWGfzswz9Ni_T6PWvFfHWnYnXwn_Kt2Ns6TLJ-MbZb-r_oBbSRNmA</recordid><startdate>20190103</startdate><enddate>20190103</enddate><creator>Grosse, Kathrin</creator><creator>Pfaff, David</creator><creator>Smith, Michael Thomas</creator><creator>Backes, Michael</creator><general>Cornell University Library, arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope></search><sort><creationdate>20190103</creationdate><title>How Wrong Am I? 
- Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models</title><author>Grosse, Kathrin ; Pfaff, David ; Smith, Michael Thomas ; Backes, Michael</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-proquest_journals_20715798253</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2019</creationdate><topic>Artificial intelligence</topic><topic>Bayesian analysis</topic><topic>Gaussian process</topic><topic>Machine learning</topic><topic>Mathematical models</topic><topic>Neural networks</topic><topic>State of the art</topic><topic>Statistical inference</topic><topic>Thresholds</topic><topic>Uncertainty</topic><toplevel>online_resources</toplevel><creatorcontrib>Grosse, Kathrin</creatorcontrib><creatorcontrib>Pfaff, David</creatorcontrib><creatorcontrib>Smith, Michael Thomas</creatorcontrib><creatorcontrib>Backes, Michael</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science &amp; Engineering Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Engineering Collection</collection><collection>Engineering Database</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Grosse, Kathrin</au><au>Pfaff, David</au><au>Smith, Michael Thomas</au><au>Backes, Michael</au><format>book</format><genre>document</genre><ristype>GEN</ristype><atitle>How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models</atitle><jtitle>arXiv.org</jtitle><date>2019-01-03</date><risdate>2019</risdate><eissn>2331-8422</eissn><abstract>Machine learning models are vulnerable to Adversarial Examples: minor perturbations to input samples intended to deliberately cause misclassification. Current defenses against adversarial examples, especially for Deep Neural Networks (DNN), are primarily derived from empirical developments, and their security guarantees are often only justified retroactively. Many defenses therefore rely on hidden assumptions that are subsequently subverted by increasingly elaborate attacks. This is not surprising: deep learning notoriously lacks a comprehensive mathematical framework to provide meaningful guarantees. In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference. Across different models and datasets, we find deviating levels of uncertainty reflect the perturbation introduced to benign samples by state-of-the-art attacks, including novel white-box attacks on Gaussian Processes. 
Our experiments demonstrate that even unoptimized uncertainty thresholds already reject adversarial examples in many scenarios. Comment: Thresholds can be broken in a modified attack, which was done in arXiv:1812.02606 (The limitations of model uncertainty in adversarial settings).</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><oa>free_for_read</oa></addata></record>
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2019-01
issn 2331-8422
language eng
recordid cdi_proquest_journals_2071579825
source Free E-Journals
subjects Artificial intelligence
Bayesian analysis
Gaussian process
Machine learning
Mathematical models
Neural networks
State of the art
Statistical inference
Thresholds
Uncertainty
title How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models