Completed Feature Disentanglement Learning for Multimodal MRIs Analysis

Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, aiming at learning superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods...

Full description

Bibliographic details
Published in: arXiv.org, 2024-07
Main authors: Liu, Tianling; Liu, Hongying; Shang, Fanhua; Yu, Lequan; Han, Tong; Liang, Wan
Format: Article
Language: English
Subjects: Data analysis; Decoupling
Online access: Full text
container_title arXiv.org
creator Liu, Tianling
Liu, Hongying
Shang, Fanhua
Yu, Lequan
Han, Tong
Liang, Wan
description Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, aiming at learning superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features, and employ concatenation or attention mechanisms to integrate these features. However, our preliminary experiments indicate that these methods could lead to a loss of shared information among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately interpret the relationships between the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the lost information during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples shared features among subsets of multimodal inputs, termed modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features by explicitly learning the local-global relationships among the features. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods by clear margins. (An illustrative code sketch of the decoupling and fusion described here appears after this record.)
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-07
issn 2331-8422
language eng
recordid cdi_proquest_journals_3077524565
source Open Access: Freely Accessible Journals by multiple vendors
subjects Data analysis
Decoupling
title Completed Feature Disentanglement Learning for Multimodal MRIs Analysis
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T10%3A46%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Completed%20Feature%20Disentanglement%20Learning%20for%20Multimodal%20MRIs%20Analysis&rft.jtitle=arXiv.org&rft.au=Liu,%20Tianling&rft.date=2024-07-06&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3077524565%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3077524565&rft_id=info:pmid/&rfr_iscdi=true
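The abstract describes two components: a Complete Feature Disentanglement (CFD) strategy that decouples one feature per non-empty subset of the input modalities (modality-specific, modality-partial-shared, and fully modality-shared), and a Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically weights the decoupled features. The snippet below is a minimal sketch of that structural idea only; the use of PyTorch, every class and parameter name, and all layer choices are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: subset-wise feature decoupling plus a simple
# mixture-of-experts gate, loosely following the ideas named in the abstract.
from itertools import combinations
import torch
import torch.nn as nn


class SubsetDisentangler(nn.Module):
    """One small encoder per non-empty subset of the input modalities."""

    def __init__(self, modalities, feat_dim):
        super().__init__()
        self.subsets = [
            frozenset(c)
            for r in range(1, len(modalities) + 1)
            for c in combinations(modalities, r)
        ]
        self.encoders = nn.ModuleDict({
            "_".join(sorted(s)): nn.Sequential(
                nn.Linear(feat_dim * len(s), feat_dim), nn.ReLU()
            )
            for s in self.subsets
        })

    def forward(self, feats):  # feats: dict modality -> (B, feat_dim)
        out = {}
        for s in self.subsets:
            x = torch.cat([feats[m] for m in sorted(s)], dim=-1)
            out[s] = self.encoders["_".join(sorted(s))](x)
        return out  # dict subset -> (B, feat_dim)


class MoEFusion(nn.Module):
    """Input-dependent gate that softly weights the decoupled features."""

    def __init__(self, num_experts, feat_dim, num_classes):
        super().__init__()
        self.gate = nn.Linear(num_experts * feat_dim, num_experts)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, expert_feats):  # list of (B, feat_dim) tensors
        stacked = torch.stack(expert_feats, dim=1)             # (B, E, D)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)  # (B, E)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (B, D)
        return self.head(fused)


# Toy usage: three MRI modalities, already embedded to 128-d vectors.
modalities = ["t1", "t2", "flair"]
feats = {m: torch.randn(4, 128) for m in modalities}
disentangler = SubsetDisentangler(modalities, feat_dim=128)
subset_feats = disentangler(feats)
fusion = MoEFusion(num_experts=len(subset_feats), feat_dim=128, num_classes=2)
logits = fusion(list(subset_feats.values()))
print(logits.shape)  # torch.Size([4, 2])
```

In the toy usage, three modality embeddings yield seven subset features (three modality-specific, three partial-shared, one fully shared), which the gate then combines into a single classification logit vector; the actual CFD/DMF modules in the paper may differ substantially in how these features are learned and constrained.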