DeepMC: DNN test sample optimization method jointly guided by misclassification and coverage
Large-scale, high-quality test samples are extremely scarce in deep neural network (DNN) testing. Existing test sample optimization methods suffer from low efficiency and low neuron coverage of the optimized samples, and consistently fail to expose erroneous behaviors of DNNs on corner-case inputs.
Published in: | Applied intelligence (Dordrecht, Netherlands), 2023-06, Vol.53 (12), p.15787-15801 |
---|---|
Main Authors: | Sun, Jiaze ; Li, Juan ; Wen, Sulei |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 15801 |
---|---|
container_issue | 12 |
container_start_page | 15787 |
container_title | Applied intelligence (Dordrecht, Netherlands) |
container_volume | 53 |
creator | Sun, Jiaze ; Li, Juan ; Wen, Sulei |
description | Large-scale, high-quality test samples are extremely scarce in deep neural network (DNN) testing. Existing test sample optimization methods suffer from low efficiency and low neuron coverage of the optimized samples, and consistently fail to expose erroneous behaviors of DNNs on corner-case inputs. In this paper, we propose DeepMC, an image classification DNN test sample optimization method jointly guided by misclassification and coverage. Specifically, we select seed samples from the original test samples according to their misclassification probability. To maximize both misclassification probability and neuron coverage, we construct a joint optimization problem over the seed samples and solve it with gradient ascent. We evaluate this method on two well-known datasets and prevalent image classification DNN models. Compared with DeepXplore, a deep-learning white-box testing framework, DeepMC does not require multiple DNN models with similar functions for cross-referencing, reduces time consumption on MNIST by 90%, covers 1.87% more neurons on average, and its optimized test samples achieve an attack success rate above 69%. In addition, the test samples optimized by DeepMC can also be used to improve the robustness of the corresponding DNN, raising model accuracy by 3% on average. |
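The core procedure the abstract describes (pick a seed sample, then run gradient ascent on a joint misclassification-plus-coverage objective) can be sketched as follows. Everything concrete here is an assumption for illustration, not the paper's implementation: the tiny randomly initialized network, the exact form of both objective terms, the weight `lam`, and the finite-difference gradients all stand in for the real DNN and its analytic gradients.

```python
import numpy as np

# Hypothetical toy setup: a fixed 2-layer network on 4-dim inputs, 3 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(W1.T @ x + b1, 0.0)   # hidden ReLU activations
    logits = W2.T @ h + b2
    return h, logits

def joint_objective(x, true_label, target_neuron, lam=0.5):
    """Misclassification term plus coverage term (exact form is an assumption)."""
    h, logits = forward(x)
    # Push non-true logits up and the true logit down -> misclassification.
    mis = logits.sum() - 2.0 * logits[true_label]
    # Push up one currently low-activation neuron -> coverage.
    cov = h[target_neuron]
    return mis + lam * cov

def gradient_ascent(x, true_label, target_neuron, step=0.05, iters=200):
    """Finite-difference gradient ascent on the joint objective."""
    x = x.copy()
    eps = 1e-4
    for _ in range(iters):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (joint_objective(x + e, true_label, target_neuron)
                    - joint_objective(x - e, true_label, target_neuron)) / (2 * eps)
        x += step * g   # ascend: we are maximizing, not minimizing
    return x

seed = rng.normal(size=4)
_, logits0 = forward(seed)
true_label = int(np.argmax(logits0))     # treat the model's prediction as "correct"
adv = gradient_ascent(seed, true_label, target_neuron=0)
print("objective before:", joint_objective(seed, true_label, 0))
print("objective after: ", joint_objective(adv, true_label, 0))
```

In a real setting the gradient would come from automatic differentiation through the actual DNN, the coverage term would target neurons whose activation falls below a chosen threshold, and the perturbation would be constrained so the optimized sample remains a valid image.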
doi_str_mv | 10.1007/s10489-022-04323-4 |
format | Article |
publisher | Springer US |
fulltext | fulltext |
identifier | ISSN: 0924-669X |
ispartof | Applied intelligence (Dordrecht, Netherlands), 2023-06, Vol.53 (12), p.15787-15801 |
issn | 0924-669X 1573-7497 |
language | eng |
recordid | cdi_proquest_journals_2821148947 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial Intelligence Artificial neural networks Computer Science Image classification Machines Manufacturing Mechanical Engineering Model accuracy Optimization Processes |
title | DeepMC: DNN test sample optimization method jointly guided by misclassification and coverage |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T07%3A51%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DeepMC:%20DNN%20test%20sample%20optimization%20method%20jointly%20guided%20by%20misclassification%20and%20coverage&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=Sun,%20Jiaze&rft.date=2023-06-01&rft.volume=53&rft.issue=12&rft.spage=15787&rft.epage=15801&rft.pages=15787-15801&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-022-04323-4&rft_dat=%3Cproquest_cross%3E2821148947%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2821148947&rft_id=info:pmid/&rfr_iscdi=true |