Neural Network Verification using Residual Reasoning


Full Description

Bibliographic Details
Main Authors: Elboher, Yizhak Yisrael; Cohen, Elazar; Katz, Guy
Format: Article
Language: English
Description: With the increasing integration of neural networks as components in mission-critical systems, there is an increasing need to ensure that they satisfy various safety and liveness requirements. In recent years, numerous sound and complete verification methods have been proposed towards that end, but these typically suffer from severe scalability limitations. Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original network. A shortcoming of such a scheme is that if verifying the smaller network fails, the verifier needs to perform a refinement step that increases the size of the network being verified, and then start verifying the new network from scratch - effectively "wasting" its earlier work on verifying the smaller network. In this paper, we present an enhancement to abstraction-based verification of neural networks, by using residual reasoning: the process of utilizing information acquired when verifying an abstract network, in order to expedite the verification of a refined network. In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly, and allows it to focus on areas where bugs might be discovered. We implemented our approach as an extension to the Marabou verifier, and obtained promising results.
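The abstraction-refinement loop with residual reasoning that the abstract describes can be sketched as follows. This is a toy illustration only, not the actual Marabou extension: the region-based search space, the spurious-violation pattern, and all names (`make_verifier`, `verify_with_residuals`, the refinement "levels") are invented for this sketch.

```python
# Toy sketch of abstraction-refinement verification with residual
# reasoning. The input space is split into regions; a region proven
# safe against an abstract (over-approximating) network stays safe
# for every refinement, so it is cached and never re-checked.

REGIONS = range(8)

def make_verifier(unsafe_region):
    """Hypothetical per-region checker: one region hides a real bug,
    and coarse abstractions (low levels) raise spurious violations."""
    def verify_region(level, region):
        if region == unsafe_region:
            return "violation"          # a real bug, at every level
        if level < 2 and region % 3 == 0:
            return "violation"          # spurious: vanishes once refined
        return "safe"
    return verify_region

def verify_with_residuals(verify_region, max_level=3):
    proven_safe = set()   # residual knowledge kept across refinements
    checks = 0            # count region checks to show the savings
    for level in range(max_level + 1):
        violations = []
        for r in REGIONS:
            if r in proven_safe:
                continue  # residual reasoning: skip proven regions
            checks += 1
            if verify_region(level, r) == "safe":
                proven_safe.add(r)
            else:
                violations.append(r)
        if not violations:
            return "SAFE", checks
        if level == max_level:  # fully refined, so violations are real
            return "UNSAFE", checks
    return "UNKNOWN", checks
```

For example, `verify_with_residuals(make_verifier(unsafe_region=5))` returns `("UNSAFE", 17)`: after each refinement, only the regions still in doubt are revisited. A verifier that restarted from scratch at every level would instead perform 8 checks per round, 32 in total.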
DOI: 10.48550/arxiv.2208.03083
Source: arXiv.org
Subjects: Computer Science - Neural and Evolutionary Computing