DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images


Bibliographic Details
Published in: arXiv.org, 2023-05
Authors: Diaz-Pinto, Andres; Mehta, Pritesh; Alle, Sachidanand; Asad, Muhammad; Brown, Richard; Nath, Vishwesh; Ihsani, Alvin; Antonelli, Michela; Palkovics, Daniel; Pinter, Csaba; Alkalay, Ron; Pieper, Steve; Roth, Holger R; Xu, Daguang; Dogra, Prerna; Vercauteren, Tom; Feng, Andrew; Abood Quraini; Ourselin, Sebastien; Cardoso, M Jorge
Format: Article
Language: English
Abstract: Automatic segmentation of medical images is a key step for diagnostic and interventional tasks. However, achieving this requires large amounts of annotated volumes, and annotation can be a tedious and time-consuming task for expert annotators. In this paper, we introduce DeepEdit, a deep learning-based method for volumetric medical image annotation that allows automatic and semi-automatic segmentation as well as click-based refinement. DeepEdit combines the power of two methods, a non-interactive method (i.e. automatic segmentation using nnU-Net, UNet, or UNETR) and an interactive segmentation method (i.e. DeepGrow), in a single deep learning model. It allows easy integration of uncertainty-based ranking strategies (i.e. aleatoric and epistemic uncertainty computation) and active learning. We propose and implement a method for training DeepEdit that combines standard training with user interaction simulation. Once trained, DeepEdit allows clinicians to quickly segment their datasets by running the algorithm in auto-segmentation mode or by providing clicks via a user interface (e.g. 3D Slicer, OHIF). We show the value of DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic lesion segmentation and the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset for abdominal CT segmentation, using state-of-the-art network architectures as baselines for comparison. DeepEdit can reduce the time and effort of annotating 3D medical images compared to DeepGrow alone. Source code is available at https://github.com/Project-MONAI/MONAILabel
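The combined automatic/interactive formulation in the abstract can be illustrated with a minimal, hypothetical sketch. The plain-Python code below is heavily simplified and is not MONAI Label's actual API: volumes are flattened to 1-D lists, guidance channels are binary click masks rather than smoothed heatmaps, and the function names (`simulate_clicks`, `build_model_input`) are invented for this example. It shows the two ideas the abstract describes: simulating corrective clicks from the mismatch between a prediction and the ground truth during training, and feeding the network an image stacked with foreground/background guidance channels that are simply zeroed in auto-segmentation mode.

```python
import random

def simulate_clicks(ground_truth, prediction, n_clicks=1, rng=random):
    """Simulate user clicks by sampling mis-segmented voxels.

    ground_truth / prediction: flat lists of 0/1 labels (a stand-in
    for a 3D volume). Returns (foreground_clicks, background_clicks)
    as lists of voxel indices.
    """
    # Voxels the model missed (should be foreground) and voxels it
    # hallucinated (should be background) are the natural places a
    # user would click to correct the segmentation.
    fg_errors = [i for i, (g, p) in enumerate(zip(ground_truth, prediction))
                 if g == 1 and p == 0]
    bg_errors = [i for i, (g, p) in enumerate(zip(ground_truth, prediction))
                 if g == 0 and p == 1]
    fg_clicks = rng.sample(fg_errors, min(n_clicks, len(fg_errors)))
    bg_clicks = rng.sample(bg_errors, min(n_clicks, len(bg_errors)))
    return fg_clicks, bg_clicks

def build_model_input(image, fg_clicks, bg_clicks, interactive=True):
    """Stack the image with two click-guidance channels.

    In interactive mode the channels encode the clicks; in automatic
    mode they are all zeros, so a single network can serve both the
    auto-segmentation and the click-refinement use cases.
    """
    n = len(image)
    fg_channel = [0.0] * n
    bg_channel = [0.0] * n
    if interactive:
        for i in fg_clicks:
            fg_channel[i] = 1.0
        for i in bg_clicks:
            bg_channel[i] = 1.0
    return [image, fg_channel, bg_channel]
```

During training, batches would alternate between `interactive=False` (pure auto-segmentation) and `interactive=True` with freshly simulated clicks, which is one plausible reading of the "standard training combined with user interaction simulation" the abstract mentions.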
DOI: 10.48550/arxiv.2305.10655
Rights: Published under http://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)
EISSN: 2331-8422
Subjects: Algorithms; Computer architecture; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Datasets; Deep learning; Image segmentation; Machine learning; Medical imaging; Source code; Training; Uncertainty