Joint Cross-Attention Network With Deep Modality Prior for Fast MRI Reconstruction

Current deep learning-based reconstruction models for accelerated multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled k-space data of a single modality using convolutional neural networks (CNNs). Although dual-domain information and data consistency constraints are commonly adopted in fast MRI reconstruction, the performance of existing models is still limited mainly by three factors: inaccurate estimation of coil sensitivity, inadequate utilization of structural priors, and the inductive bias of CNNs. To tackle these challenges, we propose an unrolling-based joint Cross-Attention Network, dubbed jCAN, which uses deep guidance from already acquired intra-subject data. In particular, to improve coil sensitivity estimation, we simultaneously optimize the latent MR image and the sensitivity map (SM). In addition, we introduce a gating layer and a Gaussian layer into SM estimation to alleviate the "defocus" and "over-coupling" effects and further improve the SM estimate. To enhance the representation ability of the proposed model, we deploy a Vision Transformer (ViT) and a CNN in the image and k-space domains, respectively. Moreover, we exploit a pre-acquired intra-subject scan as a reference modality to guide the reconstruction of the subsampled target modality through a self- and cross-attention scheme. Experimental results on public knee and in-house brain datasets demonstrate that the proposed jCAN outperforms state-of-the-art methods by a large margin in terms of SSIM and PSNR for different acceleration factors and sampling masks. Our code is publicly available at https://github.com/sunkg/jCAN.
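
The abstract describes guiding reconstruction of the undersampled target modality with a pre-acquired intra-subject reference scan via self- and cross-attention. The sketch below illustrates that general idea in PyTorch; it is a minimal, hypothetical illustration (the module name, token shapes, and layer sizes are assumptions), not the authors' implementation, which is available at https://github.com/sunkg/jCAN.

```python
import torch
import torch.nn as nn


class ReferenceGuidedAttention(nn.Module):
    """Self-attention on target-modality tokens, then cross-attention to reference-modality tokens."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, target_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # target_tokens, ref_tokens: (batch, num_tokens, dim) patch embeddings
        s, _ = self.self_attn(target_tokens, target_tokens, target_tokens)
        x = self.norm1(target_tokens + s)                   # self-attention within the undersampled target
        c, _ = self.cross_attn(x, ref_tokens, ref_tokens)   # query = target, key/value = reference scan
        return self.norm2(x + c)                            # reference-guided target features


# Toy usage with random tensors standing in for patch embeddings.
tgt = torch.randn(2, 256, 64)   # e.g. 16x16 patches of the target modality
ref = torch.randn(2, 256, 64)   # matching patches of the pre-acquired reference modality
out = ReferenceGuidedAttention()(tgt, ref)
print(out.shape)                # torch.Size([2, 256, 64])
```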

Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, 2024-01, Vol. 43 (1), pp. 558-569
Main authors: Sun, Kaicong; Wang, Qian; Shen, Dinggang
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TMI.2023.3314008
ISSN: 0278-0062; EISSN: 1558-254X
PMID: 37695966
Source: IEEE Electronic Library (IEL)
Subjects:
Artificial neural networks
Brain - diagnostic imaging
Coils
Convolutional neural networks
cross-attention
Data acquisition
Deep learning
deep modality prior
Estimation
Fast MRI reconstruction
Image acquisition
Image Processing, Computer-Assisted
Image reconstruction
Iterative methods
Knee Joint
Machine learning
Magnetic Resonance Imaging
Medical imaging
multi-coil sensitivity estimation
multi-modal fusion
Neural networks
Neural Networks, Computer
Neuroimaging
Normal Distribution
Optimization
Performance enhancement
Sensitivity
Sensitivity analysis