One Versus all for deep Neural Network Incertitude (OVNNI) quantification

Deep neural networks (DNNs) are powerful learning models, yet their results are not always reliable. This is because modern DNNs are usually uncalibrated and their epistemic uncertainty cannot be characterized. In this work, we propose a new technique to easily quantify the epistemic uncertainty of data. This method consists in mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification. On the one hand, the adjustment provided by the AVA DNN to the scores of the base classifiers allows for a more fine-grained inter-class separation. On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the requirement of using such samples during training. Our method achieves state-of-the-art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.

Detailed Description

Saved in:
Bibliographic details
Published in: IEEE access 2022-01
Main authors: Franchi, Gianni, Bursuc, Andrei, Aldea, Emanuel, Dubuisson, Séverine, Bloch, Isabelle
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title IEEE access
creator Franchi, Gianni
Bursuc, Andrei
Aldea, Emanuel
Dubuisson, Séverine
Bloch, Isabelle
description Deep neural networks (DNNs) are powerful learning models, yet their results are not always reliable. This is because modern DNNs are usually uncalibrated and their epistemic uncertainty cannot be characterized. In this work, we propose a new technique to easily quantify the epistemic uncertainty of data. This method consists in mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification. On the one hand, the adjustment provided by the AVA DNN to the scores of the base classifiers allows for a more fine-grained inter-class separation. On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the requirement of using such samples during training. Our method achieves state-of-the-art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.
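The mixing step the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes we already have per-class logits from a pretrained AVA classifier and, for each class, the logit of the "one" side of the corresponding binary OVA classifier; the function name `ovnni_scores` and the way the uncertainty is read off are hypothetical.

```python
import numpy as np

def ovnni_scores(ava_logits, ova_logits):
    """Mix All-vs-All softmax probabilities with One-vs-All
    confidence scores, class-wise (sketch of the idea in the abstract).

    ava_logits: shape (n_classes,), logits from the standard AVA DNN.
    ova_logits: shape (n_classes,), logit of the "one" class from each
                binary OVA classifier in the ensemble (assumed inputs).
    """
    # AVA branch: numerically stable softmax over all classes,
    # providing the fine-grained inter-class separation.
    ava = np.exp(ava_logits - ava_logits.max())
    ava /= ava.sum()
    # OVA branch: each binary classifier's sigmoid confidence
    # that the sample belongs to its own class.
    ova = 1.0 / (1.0 + np.exp(-ova_logits))
    # Mixing: a class only scores high when both branches agree;
    # low scores for every class flag a likely OOD sample.
    return ava * ova

# Hypothetical usage: OVA classifiers all reject the sample,
# so every combined score stays low despite a confident AVA softmax.
scores = ovnni_scores(np.array([2.0, 0.5, -1.0]),
                      np.array([-3.0, -2.5, -4.0]))
uncertainty = 1.0 - scores.max()
```

One plausible reading of the design: the AVA softmax alone can be overconfident far from the training data, while each OVA sigmoid tends to drop toward zero there, so the product keeps the in-distribution ranking of the AVA model but suppresses confidence on OOD inputs.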
doi_str_mv 10.1109/access.2021.3138978
format Article
fullrecord (cleaned; unique fields not listed elsewhere in this record)
publisher IEEE
date 2022-01-03
rights Distributed under a Creative Commons Attribution 4.0 International License
orcidid 0000-0002-2184-1381; 0000-0001-7306-4134; 0000-0001-7065-4809; 0000-0002-6984-1532
collection Hyper Article en Ligne (HAL); Hyper Article en Ligne (HAL) (Open Access)
backlink https://hal.science/hal-03097063 (View record in HAL, free for read)
fulltext fulltext
identifier ISSN: 2169-3536
ispartof IEEE access, 2022-01
issn 2169-3536
2169-3536
language eng
recordid cdi_hal_primary_oai_HAL_hal_03097063v1
source IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals
subjects Artificial Intelligence
Computer Science
Computer Vision and Pattern Recognition
Machine Learning
Mathematics
Statistics
title One Versus all for deep Neural Network Incertitude (OVNNI) quantification