Non-Determinism in Neural Networks for Adversarial Robustness

Recent breakthroughs in the field of deep learning have led to advancements in a broad spectrum of tasks in computer vision, audio processing, natural language processing and other areas. In most instances where these tasks are deployed in real-world scenarios, the models used in them have been shown to be susceptible to adversarial attacks, making it imperative for us to address the challenge of their adversarial robustness. Existing techniques for adversarial robustness fall into three broad categories: defensive distillation techniques, adversarial training techniques, and randomized or non-deterministic model based techniques. In this paper, we propose a novel neural network paradigm that falls under the category of randomized models for adversarial robustness, but differs from all existing techniques under this category in that it models each parameter of the network as a statistical distribution with learnable parameters. We show experimentally that this framework is highly robust to a variety of white-box and black-box adversarial attacks, while preserving the task-specific performance of the traditional neural network model.
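
The abstract names the core mechanism (each network parameter is modeled as a statistical distribution with learnable parameters) but this record carries no implementation details. The sketch below illustrates the general idea in PyTorch, assuming independent Gaussian distributions per weight sampled via the reparameterization trick; the class name StochasticLinear, the initialization constants, and the toy architecture are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLinear(nn.Module):
    """Linear layer whose weights are sampled anew on every forward pass.

    Each weight is modeled as an independent Gaussian with a learnable
    mean and log-standard-deviation (an assumption: this record does not
    name the distribution family the paper uses).
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.w_log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_log_sigma = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterization trick: sample = mu + sigma * eps keeps the
        # distribution parameters differentiable while the draw itself
        # changes on every call, at train and test time alike.
        w = self.w_mu + self.w_log_sigma.exp() * torch.randn_like(self.w_mu)
        b = self.b_mu + self.b_log_sigma.exp() * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

# Tiny classifier built from stochastic layers; two forward passes on the
# same input generally yield slightly different logits.
model = nn.Sequential(StochasticLinear(784, 256), nn.ReLU(), StochasticLinear(256, 10))
x = torch.randn(1, 784)
print(model(x) - model(x))  # nonzero: each pass samples fresh weights

Because the weights are redrawn on every forward pass, gradients computed by a white-box attacker describe a model instance that no longer exists at inference time, which is the usual intuition behind randomized defenses of this kind.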

Bibliographic Details
Main Authors: Khan, Daanish Ali; Li, Linhong; Sha, Ninghao; Liu, Zhuoran; Jimenez, Abelino; Raj, Bhiksha; Singh, Rita
Format: Article
Language: English
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning; Statistics - Machine Learning
DOI: 10.48550/arxiv.1905.10906
Published: 2019-05-26
Source: arXiv.org
Online Access: Full text available at https://arxiv.org/abs/1905.10906