A comparison of fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers

Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk.

Detailed description

Bibliographic details
Published in: Physics in medicine & biology 2021-02
Main authors: Groendahl, Aurora Rosvoll, Skjei Knudtsen, Ingerid, Huynh, Bao Ngoc, Mulstad, Martine, Moe, Yngve Mardal, Knuth, Franziska, Tomic, Oliver, Indahl, Ulf Geir, Torheim, Turid, Dale, Einar, Malinen, Eirik, Futsaether, Cecilia Marie
Format: Article
Language: eng
Online access: Full text
container_title Physics in medicine & biology
creator Groendahl, Aurora Rosvoll
Skjei Knudtsen, Ingerid
Huynh, Bao Ngoc
Mulstad, Martine
Moe, Yngve Mardal
Knuth, Franziska
Tomic, Oliver
Indahl, Ulf Geir
Torheim, Turid
Dale, Einar
Malinen, Eirik
Futsaether, Cecilia Marie
description Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches the impact of single vs. multimodality input on segmentation quality was also assessed. 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single-modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001). Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
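The overlap metrics used to rank the segmentation models above (Dice, true positive rate, positive predictive value) can be sketched for binary voxel masks as follows. This is an illustrative NumPy implementation, not code from the study:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()  # true positive voxels
    return 2.0 * overlap / (pred.sum() + truth.sum())

def tpr(pred: np.ndarray, truth: np.ndarray) -> float:
    """True positive rate: fraction of the ground-truth volume covered."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / truth.sum()

def ppv(pred: np.ndarray, truth: np.ndarray) -> float:
    """Positive predictive value: fraction of the prediction inside the truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / pred.sum()
```

Dice balances target coverage against inclusion of surrounding tissue: for binary masks it equals the harmonic mean of TPR and PPV, which is why a model can only score well on Dice by doing well on both.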
doi_str_mv 10.1088/1361-6560/abe553
format Article
pmid 33571978
startdate 2021-02-11
publisher England
rights 2021 Institute of Physics and Engineering in Medicine.
orcidid 0000-0001-7944-0719 ; 0000-0002-5159-9012 ; 0000-0001-5210-132X ; 0000-0001-6191-2036 ; 0000-0003-1327-3844 ; 0000-0002-6998-8681
fulltext fulltext
identifier EISSN: 1361-6560
ispartof Physics in medicine & biology, 2021-02
issn 1361-6560
language eng
recordid cdi_pubmed_primary_33571978
source IOP Publishing Journals; Institute of Physics (IOP) Journals - HEAL-Link
title A comparison of fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers