Scalable and Modular Robustness Analysis of Deep Neural Networks

As neural networks are trained to be deeper and larger, the scalability of neural network analyzers is urgently required. The main technical insight of our method is to analyze neural networks modularly, by segmenting a network into blocks and conducting the analysis for each block. In particular, we propose a network block summarization technique that captures the behavior within a network block using a block summary, and leverages that summary to speed up the analysis process. We instantiate our method in the context of a CPU version of the state-of-the-art analyzer DeepPoly and name our system Bounded-Block Poly (BBPoly). We evaluate BBPoly extensively in various experimental settings. The results indicate that our method yields precision comparable to DeepPoly's but runs faster and requires fewer computational resources. For example, BBPoly can analyze very large neural networks, such as SkipNet or ResNet, containing up to one million neurons, in roughly one hour per input image, whereas DeepPoly can take up to 40 hours to analyze a single image.
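The block-wise analysis described in the abstract can be sketched with plain interval arithmetic (a much coarser abstract domain than DeepPoly's symbolic bounds, used here only to make the idea concrete; the function names, weights, and block boundaries below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b using
    interval arithmetic: positive weights carry lower bounds to lower
    bounds, negative weights swap them."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bound endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def summarize_block(layers, lo, hi):
    """A 'block summary' here is simply the output box of the block,
    computed from its input box; layers is a list of (W, b) pairs."""
    for W, b in layers:
        lo, hi = interval_linear(lo, hi, W, b)
        lo, hi = interval_relu(lo, hi)
    return lo, hi

def analyze(blocks, lo, hi):
    """Analyze the network block by block: each block is summarized
    independently from the previous block's summary, so per-block cost
    stays bounded instead of growing with the whole network depth."""
    for block in blocks:
        lo, hi = summarize_block(block, lo, hi)
    return lo, hi
```

For instance, a single block computing `relu(x1 - x2)` on the input box `[0,1] x [0,1]` yields the summary `[0, 1]`; a following block would start its analysis from that box alone, never revisiting the first block's internals.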

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2021-08
Main authors: Zhong, Yuyi; Quang-Trung Ta; Luo, Tianzuo; Zhang, Fanlong; Siau-Cheng Khoo
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Zhong, Yuyi
Quang-Trung Ta
Luo, Tianzuo
Zhang, Fanlong
Siau-Cheng Khoo
description As neural networks are trained to be deeper and larger, the scalability of neural network analyzers is urgently required. The main technical insight of our method is to analyze neural networks modularly, by segmenting a network into blocks and conducting the analysis for each block. In particular, we propose a network block summarization technique that captures the behavior within a network block using a block summary, and leverages that summary to speed up the analysis process. We instantiate our method in the context of a CPU version of the state-of-the-art analyzer DeepPoly and name our system Bounded-Block Poly (BBPoly). We evaluate BBPoly extensively in various experimental settings. The results indicate that our method yields precision comparable to DeepPoly's but runs faster and requires fewer computational resources. For example, BBPoly can analyze very large neural networks, such as SkipNet or ResNet, containing up to one million neurons, in roughly one hour per input image, whereas DeepPoly can take up to 40 hours to analyze a single image.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-08
issn 2331-8422
language eng
recordid cdi_proquest_journals_2565273030
source Freely Accessible Journals
subjects Artificial neural networks
Network analysers
Neural networks
title Scalable and Modular Robustness Analysis of Deep Neural Networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T06%3A05%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Scalable%20and%20Modular%20Robustness%20Analysis%20of%20Deep%20Neural%20Networks&rft.jtitle=arXiv.org&rft.au=Zhong,%20Yuyi&rft.date=2021-08-31&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2565273030%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2565273030&rft_id=info:pmid/&rfr_iscdi=true