Bringing Engineering Rigor to Deep Learning

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains such as autonomous driving, robotics, and malware detection, where the correctness and predictability of a system on corner-case inputs are of great importance. Unfortunately, the common practice for validating a deep neural network (DNN) - measuring overall accuracy on a randomly selected test set - is not designed to surface corner-case errors. As recent work shows, even DNNs with state-of-the-art accuracy are easily fooled by human-imperceptible adversarial perturbations to their inputs. Questions such as how to test corner-case behaviors more thoroughly, and whether all adversarial samples have been found, remain unanswered. In the last few years, we have been working on bringing more engineering rigor into deep learning. Towards this goal, we have built five systems that test DNNs more thoroughly and verify the absence of adversarial samples for given datasets. These systems check a broad spectrum of properties (e.g., rotating an image should never change its classification) and have found thousands of error-inducing samples for popular DNNs in critical domains (e.g., ImageNet, autonomous driving, and malware detection). Our DNN verifiers are also orders of magnitude (e.g., 5,000×) faster than similar tools. This article gives an overview of our systems and discusses three open research challenges, in the hope of inspiring more future research on testing and verifying DNNs.
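
To make the adversarial-perturbation claim concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), a standard way to generate such perturbations. This is an illustration, not one of the authors' systems; the PyTorch framework, the model argument, and the eps value are assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps=0.01):
        """One-step FGSM: shift every pixel by eps in the direction that
        increases the classification loss. For small eps the change is
        imperceptible to humans, yet it often flips the prediction.
        label must be a LongTensor of class indices."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()        # perturb along the loss gradient
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

A randomly sampled test set is unlikely to contain inputs like x_adv, which is exactly why overall test accuracy says little about corner-case robustness.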

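The property quoted in the abstract - rotating an image should never change its classification - can be checked mechanically. Below is a minimal sketch of such a check, assuming a PyTorch classifier and torchvision; the authors' systems add coverage-guided input generation and formal verification on top of this basic idea.

    import torch
    import torchvision.transforms.functional as TF

    def rotation_violations(model, x, max_deg=15, step=5):
        """Metamorphic test: report rotation angles (in degrees) at which
        the model's predicted class differs from its prediction on the
        unrotated image. Any non-empty result is an error-inducing sample."""
        base_pred = model(x).argmax(dim=1)
        violations = []
        for deg in range(-max_deg, max_deg + 1, step):
            rotated = TF.rotate(x, angle=float(deg))
            if not torch.equal(model(rotated).argmax(dim=1), base_pred):
                violations.append(deg)
        return violations

Each violating angle yields a concrete failing input, which is how property-based checks of this kind can surface thousands of error-inducing samples without any manual labeling.
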
Bibliographic Details

Published in: Operating Systems Review, 2019-07, Vol. 53 (1), pp. 59-67
Authors: Pei, Kexin; Wang, Shiqi; Tian, Yuchi; Whitehouse, Justin; Vondrick, Carl; Cao, Yinzhi; Ray, Baishakhi; Jana, Suman; Yang, Junfeng
Format: Article
Language: English
Online access: Full text
Source: ACM Digital Library Complete
ISSN: 0163-5980
DOI: 10.1145/3352020.3352030