Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks

Purpose: Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy due to its ability to better quantify activity in overlapping structures. An important element of assessing the response of bone metastasis is accurate image segmentation. However, limited by the properties of QBSPECT images, the segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts. This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.

Methods: We present a new unsupervised segmentation loss function and its semi-supervised and supervised variants for training a convolutional neural network (ConvNet). The loss functions were developed based on the objective function of the classical Fuzzy C-means (FCM) algorithm. The first proposed loss function can be computed within the input image itself, without any ground-truth labels, and is thus unsupervised; the proposed supervised loss function follows the traditional paradigm of deep learning-based segmentation methods and leverages ground-truth labels during training. The last loss function is a combination of the first two and includes a weighting parameter, which enables semi-supervised segmentation using a deep learning neural network.

Experiments and results: We conducted a comprehensive study to compare our proposed methods with ConvNets trained using supervised cross-entropy and Dice loss functions, and with conventional clustering methods. The Dice similarity coefficient (DSC) and several other metrics were used as figures of merit for the task of delineating lesion and bone in both simulated and clinical SPECT/CT images. We experimentally demonstrated that the proposed methods yielded good segmentation results on a clinical dataset even though the training was done using realistic simulated images. On simulated SPECT/CT, the proposed unsupervised model's accuracy was greater than that of the conventional clustering methods while reducing computation time by 200-fold. For the clinical QBSPECT/CT, the proposed semi-supervised ConvNet model, trained using simulated images, produced DSCs of 0.75 and 0.74 for lesion and bone segmentation in SPECT, and a DSC of 0.79 for bone segmentation of CT images. These DSCs were larger than those for standard segmentation loss functions by >0.4 for SPECT segmentation and >0.07 for CT segmentation, with P-values <0.001 from a paired t-test.

Conclusions: A ConvNet-based image segmentation method that uses novel loss functions was developed and evaluated. The method can operate in unsupervised, semi-supervised, or fully supervised modes, depending on the availability of annotated training data. The results demonstrated that the proposed method provides fast and robust lesion and bone segmentation for QBSPECT/CT. The method can potentially be applied to other medical image segmentation applications.
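
The record does not reproduce the loss formulations, but the classical FCM objective they build on is standard: J = Σ_n Σ_k u_nk^m ||x_n − v_k||^2 with Σ_k u_nk = 1, where x_n are voxel intensities, v_k are cluster centroids, u_nk are fuzzy memberships, and m > 1 is the fuzziness exponent. The following is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: the ConvNet's softmax output plays the role of the memberships, the unsupervised term is the FCM objective evaluated on the input image (so no labels are needed), and a weighting parameter blends in a supervised term when labels exist. The function names, the parameter `w`, and the use of cross-entropy as the supervised term are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of an FCM-style segmentation loss (not the paper's exact code).
import torch
import torch.nn.functional as F

def fcm_unsupervised_loss(probs: torch.Tensor, image: torch.Tensor,
                          m: float = 2.0, eps: float = 1e-8) -> torch.Tensor:
    """FCM objective with the ConvNet softmax output as fuzzy memberships.

    probs: (B, C, H, W) softmax output; image: (B, 1, H, W) intensities.
    Computed from the input image alone, so no ground-truth labels are needed.
    """
    u_m = probs.clamp_min(eps) ** m  # fuzzified memberships u^m
    # Membership-weighted centroid v_k of each class, per image in the batch.
    v = (u_m * image).sum(dim=(2, 3), keepdim=True) / u_m.sum(dim=(2, 3), keepdim=True)
    # J = sum_k u_k^m * (x - v_k)^2, averaged over voxels.
    return (u_m * (image - v) ** 2).sum(dim=1).mean()

def semi_supervised_loss(probs, image, target=None, w=0.5, eps=1e-8):
    """Unsupervised FCM term plus a w-weighted supervised term when labels exist.

    target: (B, H, W) integer class labels, or None for the unsupervised mode.
    Cross-entropy is used here only for illustration; the paper derives its
    supervised variant from the FCM objective as well.
    """
    loss = fcm_unsupervised_loss(probs, image, eps=eps)
    if target is not None:
        loss = loss + w * F.nll_loss(torch.log(probs.clamp_min(eps)), target)
    return loss
```

With `w = 0` this reduces to the purely unsupervised mode, and with a large `w` it approaches the fully supervised mode, mirroring the three training regimes the abstract describes.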

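For reference, the figure of merit reported above, the Dice similarity coefficient, is DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and reference mask B; a DSC of 0.75 therefore means the overlap equals three-quarters of the average mask size. A small NumPy helper (illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```
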
Bibliographic details

Published in: Medical physics (Lancaster), 2021-07, Vol. 48 (7), p. 3860-3877
Main authors: Chen, Junyu; Li, Ye; Luna, Licia P.; Chung, Hyun W.; Rowe, Steven P.; Du, Yong; Solnes, Lilja B.; Frey, Eric C.
Format: Article
Language: English
Subjects: Cluster Analysis; convolutional neural networks; fuzzy C-means; Image Processing, Computer-Assisted; image segmentation; Neural Networks, Computer; nuclear medicine; Single Photon Emission Computed Tomography Computed Tomography; Tomography, X-Ray Computed
Online access: Full text
DOI: 10.1002/mp.14903
ISSN: 0094-2405
EISSN: 2473-4209
PMID: 33905560
Source: Wiley Online Library - AutoHoldings Journals; MEDLINE; Alma/SFX Local Collection
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T02%3A12%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20fuzzy%20clustering%20for%20SPECT/CT%20segmentation%20via%20convolutional%20neural%20networks&rft.jtitle=Medical%20physics%20(Lancaster)&rft.au=Chen,%20Junyu&rft.date=2021-07&rft.volume=48&rft.issue=7&rft.spage=3860&rft.epage=3877&rft.pages=3860-3877&rft.issn=0094-2405&rft.eissn=2473-4209&rft_id=info:doi/10.1002/mp.14903&rft_dat=%3Cproquest_pubme%3E2519326637%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2519326637&rft_id=info:pmid/33905560&rfr_iscdi=true