A least square generative network based on invariant contrastive feature pair learning for multimodal MR image synthesis


Bibliographic details
Published in: International journal for computer assisted radiology and surgery, 2023-06, Vol. 18 (6), p. 971-979
Main authors: Touati, Redha; Kadoury, Samuel
Format: Article
Language: English
Online access: Full text
Abstract:
Purpose: During MR-guided neurosurgical procedures, several factors may limit the acquisition of additional MR sequences, which neurosurgeons need in order to adjust surgical plans or to ensure complete tumor resection. MR contrasts synthesized automatically from the other available heterogeneous MR sequences could alleviate these timing constraints.
Methods: We propose a new multimodal MR synthesis approach that leverages a combination of MR modalities presenting glioblastomas to generate an additional modality. The proposed approach relies on a least-squares GAN (LSGAN) trained with an unsupervised contrastive learning strategy. We incorporate a contrastive encoder, which extracts an invariant contrastive representation from augmented pairs of the generated and real target MR contrasts. This representation describes a pair of features for each input channel, which regularizes the generator to be invariant to high-frequency orientations. Moreover, when training the generator, we add to the LSGAN loss a further term reformulated as the combination of a reconstruction loss and a novel perception loss based on the feature pairs.
Results: Compared to other multimodal MR synthesis approaches evaluated on the BraTS’18 brain dataset, the model yields the highest Dice score (0.748 ± 0.04), the lowest variation of information (2.1 ± 1.11), a probabilistic Rand index of 0.84 ± 0.03, and a global consistency error of 0.17 ± 0.04.
Conclusion: The proposed model generates reliable MR contrasts with enhanced tumors in the synthesized images on a brain tumor dataset (BraTS’18). In future work, we will perform a clinical evaluation of residual tumor segmentation during MR-guided neurosurgeries, where limited MR contrasts will be acquired during the procedure.
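The generator objective described in the abstract can be sketched as follows. The paper's exact loss weights, encoder architecture, and feature definitions are not given in this record, so `lambda_rec`, `lambda_feat`, and the toy feature arrays below are illustrative assumptions; only the least-squares adversarial terms follow the standard LSGAN formulation.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push D(real) toward 1, D(fake) toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push D(fake) toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def generator_objective(d_fake, fake_img, real_img, feat_fake, feat_real,
                        lambda_rec=10.0, lambda_feat=1.0):
    """Adversarial term plus L1 reconstruction and a feature-pair perception term.

    feat_fake / feat_real stand in for the contrastive encoder's invariant
    representations of the synthesized and real target contrasts (hypothetical
    placeholders here; the actual encoder is learned in the paper).
    """
    adv = lsgan_g_loss(d_fake)
    rec = np.mean(np.abs(fake_img - real_img))      # pixel-wise reconstruction
    feat = np.mean((feat_fake - feat_real) ** 2)    # feature-pair perception
    return adv + lambda_rec * rec + lambda_feat * feat
```

With a perfect generator (fooled discriminator, identical images, identical features) every term vanishes, which makes the decomposition easy to sanity-check.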
DOI: 10.1007/s11548-023-02916-z
Publisher: Springer International Publishing, Cham
PMID: 37103727
ORCID: https://orcid.org/0000-0003-3845-5361
ISSN: 1861-6410, 1861-6429
EISSN: 1861-6429
Source: SpringerNature Journals
Subjects:
Brain
Coders
Computer Imaging
Computer Science
Datasets
Health Informatics
Image enhancement
Imaging
Invariants
Learning
Least squares
Medicine
Medicine & Public Health
Original Article
Pattern Recognition and Graphics
Radiology
Representations
Surgery
Synthesis
Tumors
Vision