Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples
creator | Papusha, Ivan; Wu, Rosa; Brulé, Joshua; Kouskoulas, Yanni; Genin, Daniel; Schmidt, Aurora
description | There is great interest in using formal methods to guarantee the reliability of deep neural networks. However, these techniques may also be used to implant carefully selected input-output pairs. We present initial results on a novel technique for using SMT solvers to fine tune the weights of a ReLU neural network to guarantee outcomes on a finite set of particular examples. This procedure can be used to ensure performance on key examples, but it could also be used to insert difficult-to-find incorrect examples that trigger unexpected performance. We demonstrate this approach by fine tuning an MNIST network to incorrectly classify a particular image and discuss the potential for the approach to compromise the reliability of freely shared machine learning models.
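The description sketches the technique only at a high level. As a minimal illustrative sketch of the idea (not the authors' implementation), the following Python snippet uses the Z3 SMT solver to choose final-layer weights of a toy ReLU classifier so that one chosen input is forced into a chosen (incorrect) class while a few key examples keep their correct labels. Everything here is assumed for illustration: the dimensions `HIDDEN` and `CLASSES`, the activation vectors, the separation `MARGIN`, and the simplification of freezing every layer except the final linear one (which keeps the constraints linear; ReLU units can also be encoded symbolically in SMT, but that is omitted here).

```python
# Illustrative sketch only: solve for final-layer weights of a toy ReLU
# classifier with Z3 so that one chosen input is forced into a chosen
# (incorrect) class while other key examples keep their labels.
# HIDDEN, CLASSES, MARGIN, and all activation values are invented;
# this is not the paper's MNIST setup.
from z3 import Real, Solver, And, sat

HIDDEN = 4    # width of the frozen hidden layer (assumed)
CLASSES = 3   # number of output classes (assumed)
MARGIN = 0.1  # strict logit separation so the argmax is unambiguous (assumed)

# Stand-ins for hidden activations h(x) = ReLU(Wx + b) of the frozen
# part of the network, precomputed outside the solver (plain floats).
key_examples = [
    ([0.9, 0.1, 0.0, 0.2], 0),  # (activation vector, required class)
    ([0.0, 0.8, 0.3, 0.1], 1),
]
trigger_activation = [0.2, 0.1, 0.9, 0.0]  # the input to misclassify
trigger_class = 2                          # the forced (wrong) label

# Free variables: the final-layer weight matrix V (CLASSES x HIDDEN).
V = [[Real(f"v_{c}_{j}") for j in range(HIDDEN)] for c in range(CLASSES)]

def logit(c, h):
    """Symbolic logit of class c for a fixed activation vector h."""
    return sum(V[c][j] * h[j] for j in range(HIDDEN))

solver = Solver()

def require_class(h, c):
    """Constrain the argmax of the logits at h to be class c."""
    solver.add(And([logit(c, h) > logit(k, h) + MARGIN
                    for k in range(CLASSES) if k != c]))

for h, label in key_examples:
    require_class(h, label)                       # guarantee key examples
require_class(trigger_activation, trigger_class)  # implant the trigger

if solver.check() == sat:
    model = solver.model()
    for c in range(CLASSES):
        print([model[V[c][j]] for j in range(HIDDEN)])
else:
    print("unsat: no final-layer weights satisfy all constraints")
```

The dual use the description warns about is visible directly in the constraints: the same `require_class` call that guarantees correct behavior on key examples also implants the misclassification trigger, and nothing in the satisfying model distinguishes the two.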
doi | 10.48550/arxiv.2008.01204
format | Article |
creationdate | 2020-08-03
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free for read)
identifier | DOI: 10.48550/arxiv.2008.01204 |
language | eng |
recordid | cdi_arxiv_primary_2008_01204 |
source | arXiv.org |
subjects | Computer Science - Learning; Mathematics - Optimization and Control; Statistics - Machine Learning
title | Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples |
url | https://arxiv.org/abs/2008.01204