The Penalized Inverse Probability Measure for Conformal Classification

Published in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE/CVF, Jun 2024, Seattle, United States.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Melki, Paul; Bombrun, Lionel; Diallo, Boubacar; Dias, Jérôme; da Costa, Jean-Pierre
Format: Article
Language: English
Subjects:
Online Access: Request full text
creator Melki, Paul; Bombrun, Lionel; Diallo, Boubacar; Dias, Jérôme; da Costa, Jean-Pierre
description IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE/CVF, Jun 2024, Seattle, United States. The deployment of safe and trustworthy machine learning systems, particularly complex black-box neural networks, in real-world applications requires reliable and certified guarantees on their performance. The conformal prediction framework offers such formal guarantees by transforming any point predictor into a set predictor with valid, finite-set guarantees on the coverage of the true label at a chosen level of confidence. Central to this methodology is the notion of the nonconformity score function, which assigns to each example a measure of "strangeness" in comparison with the previously seen observations. While the coverage guarantees are maintained regardless of the nonconformity measure, the point predictor, and the dataset, previous research has shown that the performance of a conformal model, as measured by its efficiency (the average size of the predicted sets) and its informativeness (the proportion of prediction sets that are singletons), is influenced by the choice of the nonconformity score function. The current work introduces the Penalized Inverse Probability (PIP) nonconformity score, and its regularized version RePIP, which allow the joint optimization of both efficiency and informativeness. Through toy examples and empirical results on the task of crop and weed image classification in agricultural robotics, the current work shows how PIP-based conformal classifiers exhibit precisely the desired behavior in comparison with other nonconformity measures and strike a good balance between informativeness and efficiency.
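The split-conformal recipe summarized in the abstract can be sketched as follows. This is a minimal illustration using a common baseline nonconformity score, 1 − p̂(true class), not the paper's PIP or RePIP scores, whose exact forms are given only in the full text; the toy model, class centers, and data are invented for this example. The snippet builds prediction sets at confidence level 1 − α and then computes the two quantities the abstract defines: efficiency (average set size) and informativeness (fraction of singleton sets).

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(x):
    """Toy 3-class 'model': softmax over squared distances to class centers 0, 1, 2."""
    logits = np.stack([-(x - c) ** 2 for c in (0.0, 1.0, 2.0)], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Calibration data: each input drawn near its class center.
y_cal = rng.integers(0, 3, size=500)
x_cal = y_cal + rng.normal(0.0, 0.5, size=500)
p_cal = predict_proba(x_cal)

alpha = 0.1  # target miscoverage level (90% confidence)
# Baseline nonconformity score: 1 - estimated probability of the true class.
scores = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]
# Finite-sample-corrected conformal quantile of the calibration scores.
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set = all classes whose score would fall at or below the quantile.
y_test = rng.integers(0, 3, size=1000)
x_test = y_test + rng.normal(0.0, 0.5, size=1000)
p_test = predict_proba(x_test)
pred_sets = (1.0 - p_test) <= q  # boolean matrix (n_test, n_classes)

coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
efficiency = pred_sets.sum(axis=1).mean()               # average set size
informativeness = (pred_sets.sum(axis=1) == 1).mean()   # proportion of singletons
```

Empirical coverage lands near the 1 − α target regardless of the score used, which is the framework's guarantee; what a score like PIP changes is the trade-off between `efficiency` and `informativeness` reported at the end.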
doi_str_mv 10.48550/arxiv.2406.08884
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2406.08884
language eng
recordid cdi_arxiv_primary_2406_08884
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
Statistics - Machine Learning
title The Penalized Inverse Probability Measure for Conformal Classification