Deep‐TOF‐PET: Deep learning‐guided generation of time‐of‐flight from non‐TOF brain PET images in the image and projection domains


Bibliographic Details
Published in: Human brain mapping, 2022-11, Vol.43 (16), p.5032-5043
Main authors: Sanaat, Amirhossein; Akhavanalaf, Azadeh; Shiri, Isaac; Salimi, Yazdan; Arabi, Hossein; Zaidi, Habib
Format: Article
Language: English
Online access: Full text
Abstract: We aim to synthesize brain time‐of‐flight (TOF) PET images/sinograms from their corresponding non‐TOF information in the image space (IS) and sinogram space (SS) to increase the signal‐to‐noise ratio (SNR) and contrast of abnormalities, and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F‐FDG PET/CT scans were collected to generate TOF and non‐TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinogram was reconstructed, and the performance of both models (IS and SS) was compared with reference TOF and non‐TOF. Wide‐ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF‐PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non‐TOF‐PET images. The Bland-Altman analysis revealed that the lowest tracer uptake bias (−0.02%) and minimum variance (95% CI: −0.17%, +0.21%) were achieved for TOF‐PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non‐TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non‐TOF PET images to achieve better image quality.
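The evaluation metrics named in the abstract (RMSE and Bland-Altman bias with limits of agreement) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the toy arrays stand in for reconstructed PET volumes or per-region uptake values and are purely hypothetical.

```python
import numpy as np

def rmse(reference, predicted):
    """Root mean square error between two same-shaped image arrays."""
    return float(np.sqrt(np.mean((reference - predicted) ** 2)))

def bland_altman(reference, predicted):
    """Bland-Altman agreement: mean difference (bias) and 95% limits
    of agreement (bias +/- 1.96 * sample standard deviation)."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(reference, dtype=float)
    bias = float(np.mean(diff))
    sd = float(np.std(diff, ddof=1))  # sample SD (ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy example: a "reference TOF" image and a slightly perturbed "predicted" one.
rng = np.random.default_rng(0)
ref = rng.random((8, 8))
pred = ref + rng.normal(0.0, 0.01, size=ref.shape)

err = rmse(ref, pred)
bias, (loa_low, loa_high) = bland_altman(ref, pred)
```

In the paper these statistics are reported on tracer uptake across brain regions; the sketch above only shows the arithmetic of the two measures.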
DOI: 10.1002/hbm.26068
Publisher: John Wiley &amp; Sons, Inc (Hoboken, USA)
PMID: 36087092
Rights: 2022 The Authors. Human Brain Mapping published by Wiley Periodicals LLC.
ISSN: 1065-9471
EISSN: 1097-0193
Source: MEDLINE; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Wiley-Blackwell Open Access Titles; Wiley Online Library All Journals; PubMed Central
Subjects:
Abnormalities
Algorithms
Bias
Brain
Brain - diagnostic imaging
brain imaging
Computed tomography
Datasets
Deep Learning
Feature extraction
Fluorodeoxyglucose F18
Humans
Image contrast
Image Processing, Computer-Assisted - methods
Image quality
Localization
Medical imaging
Performance evaluation
PET/CT
Positron emission
Positron emission tomography
Positron Emission Tomography Computed Tomography
Radiomics
Root-mean-square errors
Scanners
Statistical analysis
time‐of‐flight