Strategies for deep learning‐based attenuation and scatter correction of brain 18F‐FDG PET images in the image domain

Background Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging.

Detailed description

Bibliographic details
Published in: Medical physics (Lancaster) 2024-02, Vol.51 (2), p.870-880
Main authors: Jahangir, Reza; Kamali‐Asl, Alireza; Arabi, Hossein; Zaidi, Habib
Format: Article
Language: eng
Online access: Full text
container_end_page 880
container_issue 2
container_start_page 870
container_title Medical physics (Lancaster)
container_volume 51
creator Jahangir, Reza
Kamali‐Asl, Alireza
Arabi, Hossein
Zaidi, Habib
description Background Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging.
Purpose In this study, different input settings were considered in the model training to investigate deep learning‐based AC in the image space.
Methods Three different deep learning methods were developed for direct AC in the image space: (i) use of non‐attenuation‐corrected PET images as input (NonAC‐PET), (ii) use of attenuation‐corrected PET images with a simple two‐class AC map (composed of soft tissue and background air) obtained from NonAC‐PET images (PET segmentation‐based AC [SegAC‐PET]), and (iii) use of both NonAC‐PET and SegAC‐PET images in a Double‐Channel fashion to predict ground‐truth attenuation‐corrected PET images obtained with computed tomography (CTAC‐PET). Since a simple two‐class AC map can easily be generated from NonAC‐PET images, this work assessed the added value of incorporating SegAC‐PET images into direct AC in the image space. A 4‐fold cross‐validation scheme was adopted to train and evaluate the different models using 80 brain 18F‐Fluorodeoxyglucose PET/CT images. The voxel‐wise and region‐wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain.
Results The overall root mean square error (RMSE) for the Double‐Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC‐PET and SegAC‐PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double‐Channel model for the activity concentration in the cerebellum region, as opposed to SUV biases of 0.08 ± 0.28% and 0.05 ± 0.28% for the networks that used only NonAC‐PET or SegAC‐PET as input, respectively. SegAC‐PET images, with an SUV bias of −1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double‐Channel network, relying on both SegAC‐PET and NonAC‐PET images, outperformed the other AC models.
Conclusion Since the generation of two‐class AC maps from non‐AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC‐PET images into a deep learning‐based direct AC approach. Altogether, compared with models that use only NonAC‐PET or SegAC‐PET images, the Double‐Channel deep learning network exhibited superior attenuation correction accuracy.
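The evaluation metrics quoted in the abstract (whole-brain RMSE in SUV units, and region-wise percentage SUV bias relative to the CT-based AC reference) can be sketched as below. This is a minimal illustration assuming NumPy arrays for the PET volumes; the function names, toy volumes, and the threshold-based brain mask are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rmse_suv(pred, ref):
    # Voxel-wise root mean square error, in SUV units.
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def region_suv_bias(pred, ref, mask):
    # Region-wise mean SUV bias (%) of the predicted image
    # relative to the reference (e.g., CTAC-PET) within a mask.
    p, r = pred[mask].mean(), ref[mask].mean()
    return float(100.0 * (p - r) / r)

# Toy volumes standing in for a predicted AC-PET image and the
# CT-based AC reference (hypothetical data, not from the study).
rng = np.random.default_rng(0)
ref = rng.uniform(1.0, 5.0, size=(8, 8, 8))
pred = ref + rng.normal(0.0, 0.1, size=ref.shape)
brain_mask = ref > 2.0  # illustrative region-of-interest mask

print(rmse_suv(pred, ref))
print(region_suv_bias(pred, ref, brain_mask))
```

In the study these metrics would be computed per brain region against CTAC-PET ground truth; the sketch only shows the arithmetic behind the reported SUV RMSE and percentage-bias figures.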
doi_str_mv 10.1002/mp.16914
format Article
rights 2024 American Association of Physicists in Medicine.
fulltext fulltext
identifier ISSN: 0094-2405
ispartof Medical physics (Lancaster), 2024-02, Vol.51 (2), p.870-880
issn 0094-2405
2473-4209
language eng
recordid cdi_wiley_primary_10_1002_mp_16914_MP16914
source Wiley Online Library Journals Frontfile Complete
subjects attenuation correction
deep learning
PET
quantitative imaging
radiomics
title Strategies for deep learning‐based attenuation and scatter correction of brain 18F‐FDG PET images in the image domain
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T19%3A17%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-wiley&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Strategies%20for%20deep%20learning%E2%80%90based%20attenuation%20and%20scatter%20correction%20of%20brain%2018F%E2%80%90FDG%20PET%20images%20in%20the%20image%20domain&rft.jtitle=Medical%20physics%20(Lancaster)&rft.au=Jahangir,%20Reza&rft.date=2024-02&rft.volume=51&rft.issue=2&rft.spage=870&rft.epage=880&rft.pages=870-880&rft.issn=0094-2405&rft.eissn=2473-4209&rft_id=info:doi/10.1002/mp.16914&rft_dat=%3Cwiley%3EMP16914%3C/wiley%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true