Towards Automatic Abdominal MRI Organ Segmentation: Leveraging Synthesized Data Generated From CT Labels

Deep learning has shown great promise in the ability to automatically annotate organs in magnetic resonance imaging (MRI) scans, for example, of the brain. However, despite advancements in the field, the ability to accurately segment abdominal organs remains difficult across MR. In part, this may be explained by the much greater variability in image appearance and severely limited availability of training labels. The inherent nature of computed tomography (CT) scans makes it easier to annotate, resulting in a larger availability of expert annotations for the latter. We leverage a modality-agnostic domain randomization approach, utilizing CT label maps to generate synthetic images on-the-fly during training, further used to train a U-Net segmentation network for abdominal organ segmentation. Our approach shows comparable results to fully supervised segmentation methods trained on MR data. Our method yields Dice scores of 0.90 (0.08) and 0.91 (0.08) for the right and left kidney respectively, compared to a pretrained nnU-Net model yielding 0.87 (0.20) and 0.91 (0.03). We will make our code publicly available.

Detailed description

Bibliographic details
Main authors: Ciausu, Cosmin, Krishnaswamy, Deepa, Billot, Benjamin, Pieper, Steve, Kikinis, Ron, Fedorov, Andrey
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
doi 10.48550/arxiv.2403.15609
creationdate 2024-03-22
rights http://creativecommons.org/licenses/by/4.0
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
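The abstract describes generating synthetic training images on-the-fly from CT label maps via domain randomization (as in SynthSeg-style pipelines). A minimal sketch of the core idea, sampling a random intensity distribution per label so the network never sees a fixed modality; the function name and parameter ranges are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def synthesize_image(label_map, rng=None):
    """Turn an integer label map into a randomized synthetic intensity image.

    Each anatomical label gets a freshly sampled mean/std intensity, so every
    training iteration sees a different "modality" for the same anatomy.
    """
    rng = np.random.default_rng(rng)
    image = np.zeros(label_map.shape, dtype=np.float64)
    for label in np.unique(label_map):
        mean = rng.uniform(0.0, 1.0)   # random tissue intensity
        std = rng.uniform(0.01, 0.1)   # random within-tissue variation
        mask = label_map == label
        image[mask] = rng.normal(mean, std, size=mask.sum())
    # Clip to a fixed range, as a stand-in for normalization
    return np.clip(image, 0.0, 1.0)

# Toy 2-D "label map" with three regions (background + two organs)
labels = np.zeros((4, 4), dtype=int)
labels[:2, :] = 1
labels[2:, 2:] = 2
synthetic = synthesize_image(labels, rng=0)
print(synthetic.shape)  # (4, 4)
```

In the full method, this randomized image (typically combined with random spatial deformations and blurring) would be paired with the unchanged label map to supervise a U-Net, which is how CT annotations can train an MR-applicable model.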