Fusing modalities by multiplexed graph neural networks for outcome prediction from medical data and beyond

Bibliographic Details

Published in: Medical image analysis, 2024-04, Vol. 93, Article 103064, p. 103064
Main Authors: D'Souza, Niharika S.; Wang, Hongzhi; Giovannini, Andrea; Foncubierta-Rodriguez, Antonio; Beck, Kristen L.; Boyko, Orest; Syeda-Mahmood, Tanveer F.
Format: Article
Language: English
Subjects: Autism Spectrum Disorder; Benchmarking; Graph neural networks; Humans; Multimodal fusion; Multiplex graphs; Neural Networks, Computer; Prognosis
Publisher: Elsevier B.V. (Netherlands)
DOI: 10.1016/j.media.2023.103064
ISSN: 1361-8415
EISSN: 1361-8423
PMID: 38219500
Source: MEDLINE; Elsevier ScienceDirect Journals
Online Access: Full text
Abstract

With the emergence of multimodal electronic health records, the evidence for diseases, events, or findings may be present across multiple modalities, ranging from clinical to imaging and genomic data. Developing effective patient-tailored therapeutic guidance and outcome prediction will require fusing evidence across these modalities. Developing general-purpose frameworks capable of modeling fine-grained, multi-faceted, and complex interactions, both within and across modalities, is an important open problem in multimodal fusion. Generalized multimodal fusion is extremely challenging: evidence for outcomes may not be uniform across all modalities, not all modality features may be relevant, and not all modalities may be present for all patients, so simple methods of early, late, or intermediate fusion may be inadequate. In this paper, we present a novel approach that uses the machinery of multiplexed graphs for fusion. This allows modalities to be represented through their targeted encodings. We model the relationships between modality features explicitly via multiplexed graphs derived from salient features in a combined latent space. We then derive a new graph neural network over multiplex graphs for task-informed reasoning. We compare our framework against several state-of-the-art approaches for multi-graph reasoning and multimodal fusion. First, as a sanity check on the neural network design, we evaluate the multiplexed GNN on two popular benchmark datasets, AIFB and MUTAG, against several state-of-the-art multi-relational GNNs for reasoning. Second, we evaluate our multiplexed framework against several state-of-the-art multimodal fusion frameworks on two large clinical datasets for two separate applications: the NIH-TB Portals dataset for treatment outcome prediction in tuberculosis, and the ABIDE dataset for Autism Spectrum Disorder classification. Through rigorous experimental evaluation, we demonstrate that the multiplexed GNN provides robust performance improvements in all of these diverse applications.

Highlights:
• We present a novel approach that uses the native formulation of multiplexed graphs for multimodal fusion.
• We model modality feature relationships explicitly via multiplexed graphs derived from salient features in a combined latent space.
• We derive a new graph neural network for multiplex graphs for task-informed reasoning.
• We compare our framework against several state-of-the-art multi-graph reasoning approaches on different benchmark graph datasets.
• Our framework provides robust improvements over several competing fusion approaches for various clinical applications.
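The abstract describes reasoning over a multiplex graph: the same node set (salient latent features) is replicated across one plane per modality, with intra-plane edges plus coupling between a node's copies across planes. The sketch below illustrates the general idea of one such message-passing layer. It is not the paper's implementation; all names (e.g., MultiplexGNNLayer, adjs) are hypothetical, and the coupling scheme (mean over plane copies) is an assumption made for illustration.

```python
# Minimal sketch of one message-passing layer on a multiplex graph.
# Illustrative only; not the authors' method. Assumes row-normalized
# per-plane adjacencies and mean coupling across plane copies.
import torch
import torch.nn as nn

class MultiplexGNNLayer(nn.Module):
    def __init__(self, num_planes: int, dim: int):
        super().__init__()
        # One intra-plane transform per modality plane...
        self.intra = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_planes))
        # ...and a shared transform for the cross-plane coupling signal.
        self.cross = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor, adjs: torch.Tensor) -> torch.Tensor:
        # h:    (num_planes, num_nodes, dim)        one node-state copy per plane
        # adjs: (num_planes, num_nodes, num_nodes)  row-normalized intra-plane adjacency
        intra = torch.stack(
            [self.intra[p](adjs[p] @ h[p]) for p in range(h.shape[0])]
        )
        # Coupling: each node copy also sees the mean of its copies across all planes.
        cross = self.cross(h.mean(dim=0, keepdim=True).expand_as(h))
        return self.act(intra + cross)

# Toy usage: 3 modality planes over 50 shared latent-feature nodes.
layer = MultiplexGNNLayer(num_planes=3, dim=64)
h = torch.randn(3, 50, 64)
adjs = torch.softmax(torch.randn(3, 50, 50), dim=-1)  # stand-in normalized adjacencies
out = layer(h, adjs)  # -> (3, 50, 64)
```

For the outcome-prediction tasks the record describes, a stack of such layers would feed a task head (e.g., graph pooling followed by a classifier); that head, like everything above, is an assumption for illustration rather than the published architecture.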