Multi-Loss Disentangled Generative-Discriminative Learning for Multimodal Representation in Schizophrenia
Schizophrenia (SCZ) is a multifactorial mental illness, so it is beneficial to explore this disease using multimodal data, including functional magnetic resonance imaging (fMRI), genes, and the gut microbiome. Previous studies reported that combining multimodal data can offer complementary information for better depicting the abnormalities of SCZ.
Saved in:
Published in: | IEEE journal of biomedical and health informatics 2024-11, Vol.28 (11), p.6395-6404 |
---|---|
Main Authors: | Song, Peilun ; Yuan, Xiuxia ; Li, Xue ; Song, Xueqin ; Wang, Yaping |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 6404 |
---|---|
container_issue | 11 |
container_start_page | 6395 |
container_title | IEEE journal of biomedical and health informatics |
container_volume | 28 |
creator | Song, Peilun ; Yuan, Xiuxia ; Li, Xue ; Song, Xueqin ; Wang, Yaping |
description | Schizophrenia (SCZ) is a multifactorial mental illness, so it is beneficial to explore this disease using multimodal data, including functional magnetic resonance imaging (fMRI), genes, and the gut microbiome. Previous studies reported that combining multimodal data can offer complementary information for better depicting the abnormalities of SCZ. However, existing multimodal-based methods have several limitations. First, most approaches cannot fully use the relationships among different modalities for the downstream tasks. Second, representing multimodal data by modality-common and modality-specific components can improve the performance of multimodal analysis, but this is often ignored. Third, most methods build a model for either classification or regression, so a unified model is needed to perform these tasks simultaneously. To this end, a multi-loss disentangled generative-discriminative learning (MDGDL) model was developed to tackle these issues. Specifically, using a disentangled learning method, the gene and gut microbial biomarkers were represented and separated into two modality-specific vectors and one modality-common vector. Then, a generative-discriminative framework was introduced to uncover the relationships between fMRI features and these three latent vectors, further producing attentive vectors that help the fMRI features in the downstream tasks. To validate the performance of MDGDL, an SCZ classification task and a cognitive score regression task were conducted. Results showed that MDGDL achieved superior performance and identified the most important multimodal biomarkers for SCZ. The proposed model could be a supplementary approach for multimodal data analysis: based on this method, SCZ can be analyzed by combining multimodal data, yielding further findings. |
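The abstract above describes two key ideas: splitting the non-imaging modalities (genes, gut microbiome) into two modality-specific latent vectors plus one modality-common vector, and then weighting those three vectors against the fMRI features to produce an attentive, fused representation. The following is a minimal numpy sketch of that data flow only; the random linear maps stand in for the paper's trained encoders, and all dimensions, weight names, and the dot-product attention scoring are invented for illustration, not taken from the MDGDL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 50 gene features,
# 30 gut-microbiome features, 90 fMRI features, 16-dim latent vectors.
n_gene, n_microbe, n_fmri, d = 50, 30, 90, 16

gene = rng.normal(size=n_gene)
microbe = rng.normal(size=n_microbe)
fmri = rng.normal(size=n_fmri)

# Disentangled encoders: each non-imaging modality gets its own
# modality-specific projection, and a shared projection over both
# modalities yields the single modality-common vector.
W_gene_spec = rng.normal(size=(d, n_gene))
W_microbe_spec = rng.normal(size=(d, n_microbe))
W_common = rng.normal(size=(d, n_gene + n_microbe))

z_gene = W_gene_spec @ gene                              # modality-specific (genes)
z_microbe = W_microbe_spec @ microbe                     # modality-specific (microbiome)
z_common = W_common @ np.concatenate([gene, microbe])    # modality-common

# Attention-style weighting: each latent vector is scored against the
# projected fMRI representation, and the softmax-normalized scores decide
# how strongly each vector modulates the fMRI features downstream.
W_fmri = rng.normal(size=(d, n_fmri))
h_fmri = W_fmri @ fmri                                   # fMRI in latent space

def relevance(z, h):
    # Scaled dot-product relevance of latent vector z to fMRI vector h.
    return float(z @ h) / np.sqrt(d)

scores = np.array([relevance(z, h_fmri) for z in (z_gene, z_microbe, z_common)])
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                 # softmax over 3 vectors

# Fused "attentive" representation fed to the downstream classifier/regressor.
fused = h_fmri + weights[0] * z_gene + weights[1] * z_microbe + weights[2] * z_common
```

In the actual model a unified head would then use `fused` for both the SCZ classification and the cognitive-score regression tasks; the multi-loss training that enforces the specific/common disentanglement is omitted here.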
doi_str_mv | 10.1109/JBHI.2023.3337661 |
format | Article |
fullrecord | (raw ProQuest/IEEE encoded XML record omitted; its unique fields are extracted below) |
publisher | United States: IEEE |
identifier | ISSN: 2168-2194 ; EISSN: 2168-2208 ; DOI: 10.1109/JBHI.2023.3337661 ; PMID: 38117620 ; CODEN: IJBHA9 |
ieee_id | 10366809 |
pqid | 2904573090 |
tpages | 10 |
orcid | 0009-0003-3116-581X ; 0000-0001-8124-3052 ; 0000-0002-8223-2432 ; 0000-0001-9830-7155 ; 0009-0007-8011-3628 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2168-2194 |
ispartof | IEEE journal of biomedical and health informatics, 2024-11, Vol.28 (11), p.6395-6404 |
issn | 2168-2194 ; 2168-2208 |
language | eng |
recordid | cdi_ieee_primary_10366809 |
source | IEEE Electronic Library (IEL) |
subjects | Adult ; Algorithms ; Analytical models ; Biological system modeling ; Biomarkers ; Brain - diagnostic imaging ; Brain modeling ; Data models ; Deep learning ; Female ; Functional magnetic resonance imaging ; Gastrointestinal Microbiome - genetics ; Gastrointestinal Microbiome - physiology ; gene ; gut microbiome ; Humans ; Image Interpretation, Computer-Assisted - methods ; Machine Learning ; Magnetic Resonance Imaging - methods ; Male ; Multimodal data analysis ; Multimodal Imaging - methods ; Schizophrenia - diagnostic imaging ; Schizophrenia - genetics ; Schizophrenia - physiopathology ; Young Adult |
title | Multi-Loss Disentangled Generative-Discriminative Learning for Multimodal Representation in Schizophrenia |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-13T16%3A16%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-Loss%20Disentangled%20Generative-Discriminative%20Learning%20for%20Multimodal%20Representation%20in%20Schizophrenia&rft.jtitle=IEEE%20journal%20of%20biomedical%20and%20health%20informatics&rft.au=Song,%20Peilun&rft.date=2024-11-01&rft.volume=28&rft.issue=11&rft.spage=6395&rft.epage=6404&rft.pages=6395-6404&rft.issn=2168-2194&rft.eissn=2168-2208&rft.coden=IJBHA9&rft_id=info:doi/10.1109/JBHI.2023.3337661&rft_dat=%3Cproquest_RIE%3E2904573090%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2904573090&rft_id=info:pmid/38117620&rft_ieee_id=10366809&rfr_iscdi=true |