Revolutionizing Disease Diagnosis with simultaneous functional PET/MR and Deeply Integrated Brain Metabolic, Hemodynamic, and Perfusion Networks

Simultaneous functional PET/MR (sf-PET/MR) is a cutting-edge multimodal neuroimaging technique. It provides an unprecedented opportunity to concurrently monitor and integrate the multifaceted brain networks built from spatiotemporally covaried metabolic activity, neural activity, and cerebral blood flow (perfusion). Despite its high scientific and clinical value, the limited hardware availability of PET/MR hinders its application, let alone modern AI-based PET/MR fusion models. Our objective is to develop a clinically feasible, AI-based disease diagnosis model that is trained on comprehensive sf-PET/MR data yet, at inference time, accepts single-modality input (e.g., PET only) while retaining multimodal-level accuracy. To this end, we propose MX-ARM, a multimodal MiXture-of-experts Alignment and Reconstruction Model. It is modality-detachable and modality-exchangeable, dynamically allocating different multi-layer perceptrons ("mixture of experts") through learnable weights to learn a dedicated representation for each modality, so that performance does not degrade in the uni-modal setting. To fully exploit the complex, nonlinear relations among modalities while producing fine-grained representations for uni-modal inference, we add a modal alignment module that aligns a dominant modality (e.g., PET) with the representations of the auxiliary modalities (MR). We further adopt multimodal reconstruction to improve the quality of the learned features. Experiments on rare multimodal sf-PET/MR data for Mild Cognitive Impairment diagnosis demonstrate the efficacy of our model toward clinically feasible precision medicine.
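
For readers who want a concrete picture of the design described above, the following is a minimal, hypothetical PyTorch sketch of its three ingredients: modality-detachable mixture-of-experts encoders with learnable gating, an alignment term that pulls auxiliary MR representations toward the dominant PET representation, and a multimodal reconstruction term. It is not the authors' MX-ARM implementation; the modality names (pet, bold, asl), feature dimensions, average fusion, cosine-similarity alignment, and loss weights are assumptions made purely for illustration.

# Hypothetical sketch (not the authors' MX-ARM code) of a modality-detachable
# mixture-of-experts encoder with alignment and reconstruction terms, assuming
# pre-extracted per-modality network features of size `feat_dim`.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEEncoder(nn.Module):
    """Routes one modality's features through several MLP 'experts' via a learnable gate."""

    def __init__(self, feat_dim=256, hidden_dim=128, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.GELU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(feat_dim, n_experts)  # learnable routing weights

    def forward(self, x):                                    # x: (batch, feat_dim)
        weights = F.softmax(self.gate(x), dim=-1)            # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)  # weighted mixture


class DetachableDiagnosisModel(nn.Module):
    """One MoE encoder per modality; any subset of modalities may be supplied at inference."""

    def __init__(self, feat_dim=256, hidden_dim=128, n_classes=2,
                 modalities=("pet", "bold", "asl")):
        super().__init__()
        self.encoders = nn.ModuleDict({m: MoEEncoder(feat_dim, hidden_dim) for m in modalities})
        self.decoders = nn.ModuleDict({m: nn.Linear(hidden_dim, feat_dim) for m in modalities})
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, inputs):                               # inputs: dict of available modalities
        feats = {m: self.encoders[m](x) for m, x in inputs.items()}
        fused = torch.stack(list(feats.values()), dim=0).mean(dim=0)  # simple average fusion
        logits = self.classifier(fused)

        # Alignment: pull auxiliary (MR-derived) representations toward the dominant PET ones.
        align = fused.new_zeros(())
        if "pet" in feats:
            for m, f in feats.items():
                if m != "pet":
                    align = align + (1 - F.cosine_similarity(f, feats["pet"], dim=-1)).mean()

        # Reconstruction: decode each modality's representation back to its input features.
        recon = fused.new_zeros(())
        for m, f in feats.items():
            recon = recon + F.mse_loss(self.decoders[m](f), inputs[m])

        return logits, align, recon


# Training sees all modalities; inference may pass only {"pet": ...} (uni-modal path).
model = DetachableDiagnosisModel()
batch = {m: torch.randn(8, 256) for m in ("pet", "bold", "asl")}
labels = torch.randint(0, 2, (8,))
logits, align, recon = model(batch)
loss = F.cross_entropy(logits, labels) + 0.1 * align + 0.1 * recon  # illustrative loss weights
logits_pet_only, _, _ = model({"pet": batch["pet"]})

Because each modality has its own encoder and fusion is a simple aggregate over whatever modalities are present, dropping the MR inputs at inference (PET-only) requires no architectural change, which is the clinical-feasibility point emphasized above.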

Bibliographic Details
Published in: arXiv.org, 2024-09
Main authors: Wang, Luoyu; Tao, Yitian; Yang, Qing; Liang, Yan; Liu, Siwei; Shi, Hongcheng; Shen, Dinggang; Zhang, Han
Format: Article
Language: English
Subjects: Alignment; Blood flow; Brain; Diagnosis; Hemodynamics; Medical diagnosis; Medical imaging; Metabolism; Mixtures; Multilayer perceptrons; Multilayers; Reconstruction; Representations
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online access: Full text