Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation?
How can we test AI performance? This question seems trivial, but it isn't. Standard benchmarks often have problems such as in-distribution and small-size test sets, oversimplified metrics, unfair comparisons, and short-term outcome pressure. As a consequence, good performance on standard benchmarks does not guarantee success in real-world scenarios. To address these problems, we present Touchstone, a large-scale collaborative segmentation benchmark of 9 types of abdominal organs. This benchmark is based on 5,195 training CT scans from 76 hospitals around the world and 5,903 testing CT scans from 11 additional hospitals. This diverse test set enhances the statistical significance of benchmark results and rigorously evaluates AI algorithms across various out-of-distribution scenarios. We invited 14 inventors of 19 AI algorithms to train their algorithms, while our team, as a third party, independently evaluated these algorithms on three test sets. In addition, we also evaluated pre-existing AI frameworks--which, differing from algorithms, are more flexible and can support different algorithms--including MONAI from NVIDIA, nnU-Net from DKFZ, and numerous other open-source frameworks. We are committed to expanding this benchmark to encourage more innovation of AI algorithms for the medical domain.
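The abstract describes third-party, per-organ evaluation of segmentation algorithms across in- and out-of-distribution test sets. The standard overlap metric for this kind of evaluation is the Dice similarity coefficient (DSC); the sketch below is illustrative only — the function name, the integer-label-volume encoding, and the absent-organ convention are our assumptions, not details taken from the paper.

```python
# Illustrative sketch: per-organ Dice similarity coefficient (DSC) between a
# predicted and a ground-truth segmentation, each stored as an integer label
# volume (0 = background, one integer per organ). Not the benchmark's code.
import numpy as np

def dice_per_organ(pred: np.ndarray, gt: np.ndarray, organ_labels) -> dict:
    """Return {organ_label: DSC} for each requested organ label."""
    scores = {}
    for label in organ_labels:
        p = pred == label
        g = gt == label
        denom = p.sum() + g.sum()
        # Convention assumed here: if an organ is absent from both volumes,
        # the score is a perfect 1.0 rather than undefined (0/0).
        scores[label] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom
    return scores
```

In a multi-hospital benchmark like the one described, such per-scan, per-organ scores would typically be aggregated separately for each test hospital, so that out-of-distribution performance gaps become visible instead of being averaged away.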
Saved in:
Main authors: | Bassi, Pedro R. A. S; Li, Wenxuan; Tang, Yucheng; Isensee, Fabian; Wang, Zifu; Chen, Jieneng; Chou, Yu-Cheng; Kirchhoff, Yannick; Rokuss, Maximilian; Huang, Ziyan; Ye, Jin; He, Junjun; Wald, Tassilo; Ulrich, Constantin; Baumgartner, Michael; Roy, Saikat; Maier-Hein, Klaus H; Jaeger, Paul; Ye, Yiwen; Xie, Yutong; Zhang, Jianpeng; Chen, Ziyang; Xia, Yong; Xing, Zhaohu; Zhu, Lei; Sadegheih, Yousef; Bozorgpour, Afshin; Kumari, Pratibha; Azad, Reza; Merhof, Dorit; Shi, Pengcheng; Ma, Ting; Du, Yuxin; Bai, Fan; Huang, Tiejun; Zhao, Bo; Wang, Haonan; Li, Xiaomeng; Gu, Hanxue; Dong, Haoyu; Yang, Jichen; Mazurowski, Maciej A; Gupta, Saumya; Wu, Linshan; Zhuang, Jiaxin; Chen, Hao; Roth, Holger; Xu, Daguang; Blaschko, Matthew B; Decherchi, Sergio; Cavalli, Andrea; Yuille, Alan L; Zhou, Zongwei |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Bassi, Pedro R. A. S; Li, Wenxuan; Tang, Yucheng; Isensee, Fabian; Wang, Zifu; Chen, Jieneng; Chou, Yu-Cheng; Kirchhoff, Yannick; Rokuss, Maximilian; Huang, Ziyan; Ye, Jin; He, Junjun; Wald, Tassilo; Ulrich, Constantin; Baumgartner, Michael; Roy, Saikat; Maier-Hein, Klaus H; Jaeger, Paul; Ye, Yiwen; Xie, Yutong; Zhang, Jianpeng; Chen, Ziyang; Xia, Yong; Xing, Zhaohu; Zhu, Lei; Sadegheih, Yousef; Bozorgpour, Afshin; Kumari, Pratibha; Azad, Reza; Merhof, Dorit; Shi, Pengcheng; Ma, Ting; Du, Yuxin; Bai, Fan; Huang, Tiejun; Zhao, Bo; Wang, Haonan; Li, Xiaomeng; Gu, Hanxue; Dong, Haoyu; Yang, Jichen; Mazurowski, Maciej A; Gupta, Saumya; Wu, Linshan; Zhuang, Jiaxin; Chen, Hao; Roth, Holger; Xu, Daguang; Blaschko, Matthew B; Decherchi, Sergio; Cavalli, Andrea; Yuille, Alan L; Zhou, Zongwei |
description | How can we test AI performance? This question seems trivial, but it isn't.
Standard benchmarks often have problems such as in-distribution and small-size
test sets, oversimplified metrics, unfair comparisons, and short-term outcome
pressure. As a consequence, good performance on standard benchmarks does not
guarantee success in real-world scenarios. To address these problems, we
present Touchstone, a large-scale collaborative segmentation benchmark of 9
types of abdominal organs. This benchmark is based on 5,195 training CT scans
from 76 hospitals around the world and 5,903 testing CT scans from 11
additional hospitals. This diverse test set enhances the statistical
significance of benchmark results and rigorously evaluates AI algorithms across
various out-of-distribution scenarios. We invited 14 inventors of 19 AI
algorithms to train their algorithms, while our team, as a third party,
independently evaluated these algorithms on three test sets. In addition, we
also evaluated pre-existing AI frameworks--which, differing from algorithms,
are more flexible and can support different algorithms--including MONAI from
NVIDIA, nnU-Net from DKFZ, and numerous other open-source frameworks. We are
committed to expanding this benchmark to encourage more innovation of AI
algorithms for the medical domain. |
doi_str_mv | 10.48550/arxiv.2411.03670 |
format | Article |
creationdate | 2024-11-06 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2411.03670 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2411_03670 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence Computer Science - Computer Vision and Pattern Recognition |
title | Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation? |
url | https://arxiv.org/abs/2411.03670 |