Evaluating AI systems under uncertain ground truth: a case study in dermatology

Bibliographic Details
Main Authors: Stutz, David, Cemgil, Ali Taylan, Roy, Abhijit Guha, Matejovicova, Tatiana, Barsbey, Melih, Strachan, Patricia, Schaekermann, Mike, Freyberg, Jan, Rikhye, Rajeev, Freeman, Beverly, Matos, Javier Perez, Telang, Umesh, Webster, Dale R, Liu, Yuan, Corrado, Greg S, Matias, Yossi, Kohli, Pushmeet, Liu, Yun, Doucet, Arnaud, Karthikesalingam, Alan
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Statistics - Machine Learning; Statistics - Methodology
creator Stutz, David
Cemgil, Ali Taylan
Roy, Abhijit Guha
Matejovicova, Tatiana
Barsbey, Melih
Strachan, Patricia
Schaekermann, Mike
Freyberg, Jan
Rikhye, Rajeev
Freeman, Beverly
Matos, Javier Perez
Telang, Umesh
Webster, Dale R
Liu, Yuan
Corrado, Greg S
Matias, Yossi
Kohli, Pushmeet
Liu, Yun
Doucet, Arnaud
Karthikesalingam, Alan
description For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a ground truth that is assumed certain. However, this assumption often does not hold: the ground truth may itself be uncertain. Unfortunately, this is largely ignored in standard evaluation of AI models, but it can have severe consequences, such as overestimating future performance. To avoid this, we measure the effects of ground truth uncertainty, which we assume decomposes into two main components: annotation uncertainty, which stems from the lack of reliable annotations, and inherent uncertainty due to limited observational information. This ground truth uncertainty is ignored when estimating the ground truth by deterministically aggregating annotations, e.g., by majority voting or averaging. In contrast, we propose a framework where aggregation is done using a statistical model. Specifically, we frame aggregation of annotations as posterior inference of so-called plausibilities, representing distributions over classes in a classification setting, subject to a hyper-parameter encoding annotator reliability. Based on this model, we propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation. We present a case study applying our framework to skin condition classification from images, where annotations are provided in the form of differential diagnoses. The deterministic adjudication process called inverse rank normalization (IRN) from previous work ignores ground truth uncertainty in evaluation. Instead, we present two alternative statistical models: a probabilistic version of IRN and a Plackett-Luce-based model. We find that a large portion of the dataset exhibits significant ground truth uncertainty and that standard IRN-based evaluation severely overestimates performance without providing uncertainty estimates.
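To make the evaluation idea concrete, the sketch below shows one way statistical aggregation of annotations and an uncertainty-adjusted metric could look in code. It is a minimal stand-in under simplifying assumptions, not the paper's probabilistic IRN or Plackett-Luce models: it assumes each annotator contributes a single top diagnosis, uses a Dirichlet-categorical posterior over plausibilities with an illustrative reliability weight, and all function names and parameter values are hypothetical.

```python
"""Illustrative sketch only: aggregate annotations into sampled "plausibilities"
(distributions over classes) and compute an uncertainty-adjusted top-k accuracy.
This is a simple Dirichlet-categorical stand-in for the paper's statistical
aggregation models; names and defaults are assumptions, not the authors' API."""
import numpy as np

def sample_plausibilities(annotations, num_classes, reliability=1.0,
                          prior=0.1, num_samples=1000, rng=None):
    """Posterior samples of the class distribution ("plausibility") for one case.

    annotations: list of class indices, one per annotator (each annotator's
                 top differential diagnosis in this toy setup).
    reliability: pseudo-count weight per annotation; larger values make the
                 posterior trust annotators more (a stand-in for the paper's
                 annotator-reliability hyper-parameter).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(annotations, minlength=num_classes).astype(float)
    posterior_alpha = prior + reliability * counts  # conjugate Dirichlet update
    return rng.dirichlet(posterior_alpha, size=num_samples)  # (num_samples, num_classes)

def uncertainty_adjusted_topk_accuracy(model_scores, plausibility_samples, k=3, rng=None):
    """Average top-k accuracy over ground-truth labels drawn from sampled plausibilities.

    model_scores: (num_cases, num_classes) scores from the AI system.
    plausibility_samples: list of (num_samples, num_classes) arrays, one per case.
    Returns the mean and standard deviation of accuracy across posterior samples,
    i.e. a point estimate plus an uncertainty estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_samples = plausibility_samples[0].shape[0]
    topk = np.argsort(-model_scores, axis=1)[:, :k]  # (num_cases, k) predicted classes
    accs = np.zeros(num_samples)
    for s in range(num_samples):
        hits = []
        for i, samples in enumerate(plausibility_samples):
            # Draw a plausible ground-truth label for case i from posterior sample s.
            label = rng.choice(samples.shape[1], p=samples[s])
            hits.append(label in topk[i])
        accs[s] = np.mean(hits)
    return accs.mean(), accs.std()

# Toy usage: 3 annotators, 4 skin-condition classes, 2 cases (made-up numbers).
rng = np.random.default_rng(0)
plaus = [sample_plausibilities([0, 0, 2], num_classes=4, rng=rng),
         sample_plausibilities([1, 2, 3], num_classes=4, rng=rng)]
scores = np.array([[0.70, 0.10, 0.15, 0.05],
                   [0.05, 0.50, 0.30, 0.15]])
print(uncertainty_adjusted_topk_accuracy(scores, plaus, k=2, rng=rng))
```

Reporting the spread of the metric across posterior samples, rather than a single point estimate from deterministic adjudication, is the kind of uncertainty estimate that, per the abstract, IRN-style evaluation does not provide.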
doi_str_mv 10.48550/arxiv.2307.02191
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2307.02191
language eng
recordid cdi_arxiv_primary_2307_02191
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
Statistics - Machine Learning
Statistics - Methodology
title Evaluating AI systems under uncertain ground truth: a case study in dermatology
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T14%3A43%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Evaluating%20AI%20systems%20under%20uncertain%20ground%20truth:%20a%20case%20study%20in%20dermatology&rft.au=Stutz,%20David&rft.date=2023-07-05&rft_id=info:doi/10.48550/arxiv.2307.02191&rft_dat=%3Carxiv_GOX%3E2307_02191%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true