Automated segmentation of an intensity calibration phantom in clinical CT images using a convolutional neural network

Bibliographic details
Published in: International journal for computer assisted radiology and surgery, 2021-11, Vol. 16 (11), p. 1855-1864
Main authors: Uemura, Keisuke; Otake, Yoshito; Takao, Masaki; Soufi, Mazen; Kawasaki, Akihiro; Sugano, Nobuhiko; Sato, Yoshinobu
Format: Article
Language: English
Online access: Full text
Abstract:
Purpose: In quantitative computed tomography (CT), manual selection of the intensity calibration phantom’s region of interest is necessary for calculating density (mg/cm³) from the radiodensity values (Hounsfield units: HU). However, as this manual process requires effort and time, the purposes of this study were to develop a system that applies a convolutional neural network (CNN) to automatically segment intensity calibration phantom regions in CT images and to test the system in a large cohort to evaluate its robustness.
Methods: This cross-sectional, retrospective study included 1040 cases (520 each from two institutions) in which an intensity calibration phantom (B-MAS200, Kyoto Kagaku, Kyoto, Japan) was used. A training dataset was created by manually segmenting the phantom regions for 40 cases (20 cases for each institution). The CNN model’s segmentation accuracy was assessed with the Dice coefficient, and the average symmetric surface distance was assessed through fourfold cross-validation. Further, the absolute difference in HU was compared between manually and automatically segmented regions. The system was tested on the remaining 1000 cases. For each institution, linear regression was applied to calculate the correlation coefficients between HU and phantom density.
Results: The source code and the model used for phantom segmentation can be accessed at https://github.com/keisuke-uemura/CT-Intensity-Calibration-Phantom-Segmentation. The median Dice coefficient was 0.977, and the median average symmetric surface distance was 0.116 mm. The median absolute difference in HU between manually and automatically segmented regions was 0.114 HU. For the test cases, the median correlation coefficients were 0.9998 and 0.999 for the two institutions, with a minimum value of 0.9863.
Conclusion: The proposed CNN model successfully segmented the calibration phantom regions in CT images with excellent accuracy.
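The evaluation and calibration steps named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code (that is at the GitHub link above): the function names, the toy masks, and the example HU/density pairs are assumptions chosen for demonstration only.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1 = phantom voxel)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

def fit_hu_to_density(mean_hu: np.ndarray, density: np.ndarray):
    """Least-squares line density = a * HU + b over the phantom rods,
    plus the Pearson correlation coefficient r."""
    a, b = np.polyfit(mean_hu, density, 1)
    r = np.corrcoef(mean_hu, density)[0, 1]
    return a, b, r

# Toy example: identical 4x4 masks give a Dice coefficient of 1.0,
# and a perfectly linear phantom response gives r = 1.0.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
print(dice_coefficient(mask, mask))  # → 1.0

hu = np.array([0.0, 100.0, 200.0, 300.0])     # mean HU per rod (made up)
rho = np.array([0.0, 50.0, 100.0, 150.0])     # rod density, mg/cm^3 (made up)
a, b, r = fit_hu_to_density(hu, rho)
print(round(float(a), 3), round(float(r), 4))  # → 0.5 1.0
```

In the study's workflow, `pred` would be the CNN output, `truth` the manual segmentation, and the fitted line would convert HU measured inside the segmented rods into density for each scanner/institution separately.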
DOI: 10.1007/s11548-021-02345-w
Publisher: Springer International Publishing, Cham
PMID: 33730352
ORCID: 0000-0002-9245-1743
ISSN: 1861-6410
EISSN: 1861-6429
Source: Springer Nature - Complete Springer Journals
Subjects:
Artificial neural networks
Automation
Calibration
Computed tomography
Computer Imaging
Computer Science
Correlation coefficients
Density
Health Informatics
Image segmentation
Imaging
Mathematical models
Medical imaging
Medicine
Medicine & Public Health
Model accuracy
Neural networks
Original Article
Pattern Recognition and Graphics
Radiology
Source code
Surgery
Vision