Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features
Intensive research has been conducted on the verification and validation of deep neural networks (DNNs), aiming to understand if, and how, DNNs can be applied to safety critical applications. However, existing verification and validation techniques are limited by their scalability, over both the size of the DNN and the size of the dataset.
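The abstraction pipeline summarised above (dimensionality reduction over hidden-layer activations to identify hidden features, discretisation of each feature into the states of a Bayesian-network node, and probabilistic inference for runtime monitoring of rare inputs) can be sketched roughly as follows. This is an illustrative sketch only: the choice of PCA, the bin counts, and the full joint table standing in for a factorised Bayesian network are assumptions, not the authors' exact method in DeepConcolic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the hidden-layer activations of a DNN over a
# dataset: rows = inputs, columns = neurons of one hidden layer.
activations = rng.normal(size=(500, 32))

# 1. Dimensionality reduction (here PCA via SVD, as one possible choice)
#    to identify a few "hidden features" learned by the layer.
mean = activations.mean(axis=0)
centered = activations - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 2                                    # hidden features kept per layer
features = centered @ vt[:k].T           # shape (500, k)

# 2. Discretise each feature into intervals; each feature becomes a BN node
#    whose states are the intervals it can fall into.
n_bins = 3
edges = [np.quantile(features[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
         for j in range(k)]
states = np.column_stack([np.digitize(features[:, j], edges[j])
                          for j in range(k)])

# 3. Estimate a distribution over node states (a full joint table here,
#    standing in for BN inference with learned conditional tables).
joint = np.zeros((n_bins,) * k)
for row in states:
    joint[tuple(row)] += 1
joint /= joint.sum()

# 4. Runtime monitoring: flag an input whose hidden-feature state
#    combination was rarely (or never) seen on the training data.
def is_rare(hidden_activation, threshold=0.01):
    f = (hidden_activation - mean) @ vt[:k].T
    s = tuple(np.digitize(f[j], edges[j]) for j in range(k))
    return bool(joint[s] < threshold)
```

A covariate-shift monitor would follow the same idea at the distribution level: compare the empirical state frequencies of a window of operational inputs against `joint`.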
Saved in:
Main authors: | Berthier, Nicolas; Alshareef, Amany; Sharp, James; Schewe, Sven; Huang, Xiaowei |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Software Engineering |
Online access: | Order full text |
creator | Berthier, Nicolas; Alshareef, Amany; Sharp, James; Schewe, Sven; Huang, Xiaowei |
description | Intensive research has been conducted on the verification and validation of
deep neural networks (DNNs), aiming to understand if, and how, DNNs can be
applied to safety critical applications. However, existing verification and
validation techniques are limited by their scalability, over both the size of
the DNN and the size of the dataset. In this paper, we propose a novel
abstraction method which abstracts a DNN and a dataset into a Bayesian network
(BN). We make use of dimensionality reduction techniques to identify hidden
features that have been learned by hidden layers of the DNN, and associate each
hidden feature with a node of the BN. On this BN, we can conduct probabilistic
inference to understand the behaviours of the DNN processing data. More
importantly, we can derive a runtime monitoring approach to detect rare inputs
and covariate shift of the input data at operational time. We can also
adapt existing structural coverage-guided testing techniques (i.e., based on
low-level elements of the DNN such as neurons), in order to generate test cases
that better exercise hidden features. We implement and evaluate the BN
abstraction technique using our DeepConcolic tool available at
https://github.com/TrustAI/DeepConcolic. |
doi_str_mv | 10.48550/arxiv.2103.03704 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2103.03704 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2103_03704 |
source | arXiv.org |
subjects | Computer Science - Learning Computer Science - Software Engineering |
title | Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T07%3A55%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Abstraction%20and%20Symbolic%20Execution%20of%20Deep%20Neural%20Networks%20with%20Bayesian%20Approximation%20of%20Hidden%20Features&rft.au=Berthier,%20Nicolas&rft.date=2021-03-05&rft_id=info:doi/10.48550/arxiv.2103.03704&rft_dat=%3Carxiv_GOX%3E2103_03704%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |