Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks
Lipschitz-constrained networks have gathered considerable attention in the deep learning community, with uses ranging from Wasserstein distance estimation to the training of certifiably robust classifiers. However, they remain commonly considered less accurate, and their properties in learning are still not fully understood.
Main Authors: | Béthune, Louis; Boissin, Thibaut; Serrurier, Mathieu; Mamalet, Franck; Friedrich, Corentin; González-Sanz, Alberto |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Artificial Intelligence; Computer Science; Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Béthune, Louis; Boissin, Thibaut; Serrurier, Mathieu; Mamalet, Franck; Friedrich, Corentin; González-Sanz, Alberto |
description | Lipschitz-constrained networks have gathered considerable attention in the deep learning community, with uses ranging from Wasserstein distance estimation to the training of certifiably robust classifiers. However, they remain commonly considered less accurate, and their properties in learning are still not fully understood. In this paper we clarify the matter: when it comes to classification, 1-Lipschitz neural networks enjoy several advantages over their unconstrained counterparts. First, we show that these networks are as accurate as classical ones and can fit arbitrarily difficult boundaries. Then, relying on a robustness metric that reflects operational needs, we characterize the most robust classifier: the WGAN discriminator. Next, we show that 1-Lipschitz neural networks generalize well under milder assumptions. Finally, we show that the hyper-parameters of the loss are crucial for controlling the accuracy-robustness trade-off. We conclude that 1-Lipschitz networks exhibit appealing properties that pave the way toward provably accurate and provably robust neural networks. |
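The abstract rests on two technical ingredients: 1-Lipschitz layers and a loss whose hyper-parameters steer the accuracy-robustness trade-off. The sketch below is a minimal illustration of that idea, not the authors' code: it uses PyTorch's spectral-norm parametrization as a stand-in for properly constrained layers, and a hinge loss whose `margin` plays the role of the trade-off hyper-parameter; the names `lipschitz_mlp` and `hinge_loss` are hypothetical, and the paper's own architectures and losses may differ.

```python
# Illustrative sketch only (not the paper's code): a 1-Lipschitz MLP
# trained with a margin hinge loss.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def lipschitz_mlp(in_dim: int, hidden: int = 64) -> nn.Sequential:
    # Each linear layer is rescaled to spectral norm ~1 (estimated by
    # power iteration), and tanh is 1-Lipschitz, so the composition is
    # an (approximately) 1-Lipschitz function in the L2 norm.
    return nn.Sequential(
        spectral_norm(nn.Linear(in_dim, hidden)),
        nn.Tanh(),
        spectral_norm(nn.Linear(hidden, 1)),
    )

def hinge_loss(scores, labels, margin=1.0):
    # labels in {-1, +1}. For a 1-Lipschitz score f, a point classified
    # with margin m is certifiably robust to any L2 perturbation of norm
    # below m, so a larger `margin` trades clean accuracy for robustness.
    return torch.relu(margin - labels * scores.squeeze(-1)).mean()

# Toy usage: two Gaussian blobs labelled -1 / +1.
torch.manual_seed(0)
x = torch.cat([torch.randn(64, 2) - 2.0, torch.randn(64, 2) + 2.0])
y = torch.cat([-torch.ones(64), torch.ones(64)])
model = lipschitz_mlp(2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = hinge_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final hinge loss: {loss.item():.3f}")
```

Because every layer contributes a Lipschitz constant of at most 1, the certified radius can be read directly off the achieved margin, with no extra per-layer bookkeeping.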
doi_str_mv | 10.48550/arxiv.2104.05097 |
format | Conference Proceeding |
creationdate | 2022 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 ; Distributed under a Creative Commons Attribution 4.0 International License |
orcidid | https://orcid.org/0000-0002-8959-1091 |
links | https://arxiv.org/abs/2104.05097 ; https://hal.science/hal-03872080 |
identifier | DOI: 10.48550/arxiv.2104.05097 |
language | eng |
recordid | cdi_arxiv_primary_2104_05097 |
source | arXiv.org |
subjects | Artificial Intelligence; Computer Science; Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
title | Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks |