Parotid Gland MRI Segmentation Based on Swin-Unet and Multimodal Images

Background and objective: Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Preoperative tumor localization, differential diagnosis, and subsequent selection of appropriate treatment for parotid gland tumors are critical. However, the relative rarity of these tumors and the highly dispersed tissue types have left an unmet need for a subtle differential diagnosis of such neoplastic lesions based on preoperative radiomics. Recently, deep learning methods have developed rapidly; in particular, Transformer-based models have surpassed traditional convolutional neural networks in computer vision, and many new Transformer-based networks have been proposed for vision tasks. Methods: In this study, multicenter multimodal parotid gland MR images were collected. Swin-Unet, a Transformer-based network, was used. MR images of the short tau inversion recovery (STIR), T1-weighted, and T2-weighted modalities were combined into three-channel data to train the network, and segmentation of the regions of interest for the parotid gland and tumor was achieved. Results: On the test set, the model achieved a Dice-Similarity Coefficient of 88.63%, a Mean Pixel Accuracy of 99.31%, a Mean Intersection over Union of 83.99%, and a Hausdorff Distance of 3.04. A series of comparison experiments was then designed to further validate the segmentation performance of the algorithm. Conclusions: The experimental results show that the method segments the parotid gland and tumor well, and that the Transformer-based network outperforms traditional convolutional neural networks on medical images.
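The abstract's pipeline — stacking three co-registered MR modalities into one three-channel input and scoring masks with the Dice-Similarity Coefficient — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are invented here, and the inputs are assumed to be already registered and intensity-normalized arrays.

```python
import numpy as np

def stack_modalities(stir, t1, t2):
    """Combine co-registered STIR, T1-weighted, and T2-weighted slices
    (each H x W) into a single three-channel array (H x W x 3), as the
    abstract describes for the Swin-Unet input."""
    return np.stack([stir, t1, t2], axis=-1)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice-Similarity Coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). eps guards against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the target."""
    return float((np.asarray(pred) == np.asarray(target)).mean())
```

The reported metrics (Dice, Mean Pixel Accuracy, Mean IoU) would be averaged over the test set; Hausdorff Distance additionally requires the mask boundaries and is omitted from this sketch.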

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Xu, Zi'an, Dai, Yin, Liu, Fayu, Li, Siqi, Liu, Sheng, Shi, Lifu, Fu, Jun
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online Access: Order full text
description Background and objective: Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Preoperative tumor localization, differential diagnosis, and subsequent selection of appropriate treatment for parotid gland tumors are critical. However, the relative rarity of these tumors and the highly dispersed tissue types have left an unmet need for a subtle differential diagnosis of such neoplastic lesions based on preoperative radiomics. Recently, deep learning methods have developed rapidly, especially Transformer beats the traditional convolutional neural network in computer vision. Many new Transformer-based networks have been proposed for computer vision tasks. Methods: In this study, multicenter multimodal parotid gland MR images were collected. The Swin-Unet which was based on Transformer was used. MR images of short time inversion recovery, T1-weighted and T2-weighted modalities were combined into three-channel data to train the network. We achieved segmentation of the region of interest for parotid gland and tumor. Results: The Dice-Similarity Coefficient of the model on the test set was 88.63%, Mean Pixel Accuracy was 99.31%, Mean Intersection over Union was 83.99%, and Hausdorff Distance was 3.04. Then a series of comparison experiments were designed in this paper to further validate the segmentation performance of the algorithm. Conclusions: Experimental results showed that our method has good results for parotid gland and tumor segmentation. The Transformer-based network outperforms the traditional convolutional neural network in the field of medical images.
doi 10.48550/arxiv.2206.03336
format Article
identifier DOI: 10.48550/arxiv.2206.03336
language eng
recordid cdi_arxiv_primary_2206_03336
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title Parotid Gland MRI Segmentation Based on Swin-Unet and Multimodal Images
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T02%3A21%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Parotid%20Gland%20MRI%20Segmentation%20Based%20on%20Swin-Unet%20and%20Multimodal%20Images&rft.au=Xu,%20Zi'an&rft.date=2022-06-07&rft_id=info:doi/10.48550/arxiv.2206.03336&rft_dat=%3Carxiv_GOX%3E2206_03336%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true