Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis
Published in: | Physics in medicine & biology 2024-08, Vol.69 (15), p.155012 |
---|---|
Main authors: | Touati, Redha; Trung Le, William; Kadoury, Samuel |
Format: | Article |
Language: | eng |
Subjects: | 3D multi-view image modeling; adversarial network; dual feature learning; dynamic features; generative network model; image generation |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | 15 |
container_start_page | 155012 |
container_title | Physics in medicine & biology |
container_volume | 69 |
creator | Touati, Redha; Trung Le, William; Kadoury, Samuel |
description | Head and neck radiotherapy planning requires tissue electron densities for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since MRI provides no information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 (5.167) in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 (0.09) and a Fréchet inception distance (FID) of 145.60 (8.38). The model yields an MAE of 26.83 (8.27) when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 (0.06) and an FID of 122.58 (7.55). The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios (PSNR) of 27.89 (2.22) and 26.08 (2.95) when synthesizing MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored. |
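To make the dual-branch idea in the abstract concrete, below is a minimal PyTorch sketch of a generator with a 2D U-Net branch for the in-plane MRI slice and a second branch that encodes stacked orthogonal (multi-planar) views of the same volume, fused at the bottleneck. All module names, layer sizes, the concatenation-based fusion, and the HU window in the evaluation helper are illustrative assumptions; this is not the authors' implementation, and the paper's exact architecture, losses, and discriminator are not reproduced here.

```python
# Minimal sketch (not the paper's code) of a dual-branch sCT generator:
# a U-Net branch encodes the in-plane MRI slice, a multi-planar branch
# encodes three orthogonal views through the same voxel, and the two are
# fused at the bottleneck before decoding back to a synthetic-CT slice.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualBranchGenerator(nn.Module):
    def __init__(self, base: int = 32):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        # U-Net branch: in-plane T1-weighted MRI slice (1 channel).
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        # Multi-planar branch: axial/sagittal/coronal views stacked as 3 channels.
        self.planar = conv_block(3, base * 2)
        # Bottleneck consumes the concatenated features of both branches.
        self.bottleneck = conv_block(base * 4, base * 4)
        self.up = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec = conv_block(base * 2 + base, base)
        self.head = nn.Conv2d(base, 1, 1)  # single-channel sCT output

    def forward(self, mri_slice: torch.Tensor, planar_views: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(mri_slice)                   # (B, base, H, W)
        e2 = self.enc2(self.pool(e1))               # (B, 2*base, H/2, W/2)
        p = self.planar(self.pool(planar_views))    # (B, 2*base, H/2, W/2)
        b = self.bottleneck(torch.cat([e2, p], 1))  # fuse the two branches
        d = self.up(b)                              # back to (H, W)
        d = self.dec(torch.cat([d, e1], 1))         # U-Net skip connection
        return torch.tanh(self.head(d))             # sCT in [-1, 1]


def mae_hu(sct: torch.Tensor, ct: torch.Tensor,
           hu_min: float = -1000.0, hu_max: float = 2000.0) -> torch.Tensor:
    """MAE after mapping [-1, 1] outputs back to Hounsfield units.
    The HU window is an assumed normalization, chosen for illustration."""
    scale = (hu_max - hu_min) / 2.0
    return (scale * (sct - ct)).abs().mean()


if __name__ == "__main__":
    g = DualBranchGenerator()
    x = torch.randn(2, 1, 128, 128)  # in-plane slices
    v = torch.randn(2, 3, 128, 128)  # three orthogonal views, resampled to match
    print(g(x, v).shape)             # torch.Size([2, 1, 128, 128])
```

In a training setup like the one the abstract describes, such a generator would be paired with a discriminator and adversarial plus reconstruction losses; the MAE, MSSIM, FID, and PSNR figures quoted above are computed on the synthesized images after mapping back to HU space, along the lines of the helper function in the sketch.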
doi_str_mv | 10.1088/1361-6560/ad611a |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0031-9155; EISSN: 1361-6560 |
ispartof | Physics in medicine & biology, 2024-08, Vol.69 (15), p.155012 |
issn | 0031-9155; 1361-6560 |
language | eng |
recordid | cdi_iop_journals_10_1088_1361_6560_ad611a |
source | HEAL-Link subscriptions: Institute of Physics (IOP) Journals; Institute of Physics Journals |
subjects | 3D multi-view image modeling; adversarial network; dual feature learning; dynamic features; generative network model; image generation |
title | Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis |