Learning Missing Modal Electronic Health Records with Unified Multi-modal Data Embedding and Modality-Aware Attention

Electronic Health Records (EHRs) provide abundant information through various modalities. However, learning from multi-modal EHR currently faces two major challenges: 1) data embedding and 2) cases with missing modalities. The lack of a shared embedding function across modalities can discard the temporal relationships between different EHR modalities. On the other hand, most EHR studies rely only on EHR time-series, so missing modalities in EHR have not been well explored. In this study, we therefore introduce a Unified Multi-modal Set Embedding (UMSE) and Modality-Aware Attention (MAA) with a Skip Bottleneck (SB). UMSE handles all EHR modalities without a separate imputation module or error-prone carry-forward, while MAA with SB learns from missing-modal EHR through effective modality-aware attention. Our model outperforms baseline models in mortality, vasopressor need, and intubation need prediction on the MIMIC-IV dataset.
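The Unified Multi-modal Set Embedding described in the abstract represents every observation, regardless of modality, through one shared embedding function, so missing entries need no imputation or carry-forward. The following is only a hypothetical sketch of such a set-style input representation; the function names, feature layout, and modality list are assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative modality vocabulary (assumed, not from the paper).
MODALITIES = {"vitals": 0, "labs": 1, "notes": 2}

def embed_event(time, modality, value, dim=8):
    """Toy shared embedding applied uniformly to every modality:
    a sinusoidal time feature, a one-hot modality code, and the
    scalar value, zero-padded to a common dimension `dim`."""
    one_hot = np.zeros(len(MODALITIES))
    one_hot[MODALITIES[modality]] = 1.0
    feat = np.concatenate(([np.sin(time), np.cos(time)], one_hot, [value]))
    out = np.zeros(dim)
    out[: feat.size] = feat
    return out

def embed_record(events, dim=8):
    """A patient record is an unordered set of (time, modality, value)
    events; a missing modality simply contributes no events, rather
    than placeholder or carried-forward values."""
    return np.stack([embed_event(t, m, v, dim) for t, m, v in events])
```

Because the record is a set of embedded events, a patient with no lab or note events is handled by the same code path as a fully observed patient.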

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Lee, Kwanhyung, Lee, Soojeong, Hahn, Sangchul, Hyun, Heejung, Choi, Edward, Ahn, Byungeun, Lee, Joohyung
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description: Electronic Health Records (EHRs) provide abundant information through various modalities. However, learning from multi-modal EHR currently faces two major challenges: 1) data embedding and 2) cases with missing modalities. The lack of a shared embedding function across modalities can discard the temporal relationships between different EHR modalities. On the other hand, most EHR studies rely only on EHR time-series, so missing modalities in EHR have not been well explored. In this study, we therefore introduce a Unified Multi-modal Set Embedding (UMSE) and Modality-Aware Attention (MAA) with a Skip Bottleneck (SB). UMSE handles all EHR modalities without a separate imputation module or error-prone carry-forward, while MAA with SB learns from missing-modal EHR through effective modality-aware attention. Our model outperforms baseline models in mortality, vasopressor need, and intubation need prediction on the MIMIC-IV dataset.
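The Modality-Aware Attention described above can be loosely illustrated as attention over multi-modal tokens in which tokens from missing modalities are masked out of the score computation. This is an assumed sketch only, not the paper's MAA or Skip Bottleneck implementation; `modality_aware_attention` and its arguments are hypothetical names:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def modality_aware_attention(query, tokens, modality_ids, present):
    """Attend over multi-modal tokens, masking missing modalities.

    query:        (d,)   pooled query vector
    tokens:       (n, d) one embedding per input token
    modality_ids: (n,)   which modality produced each token
    present:      dict   modality id -> True if observed for this patient
    """
    scores = tokens @ query / np.sqrt(tokens.shape[1])
    mask = np.array([present[m] for m in modality_ids])
    # Tokens from unobserved modalities get -inf, i.e. zero attention weight.
    scores = np.where(mask, scores, -np.inf)
    weights = softmax(scores)
    return weights @ tokens, weights
```

Under this sketch, a patient missing an entire modality contributes no attention mass from that modality, while observed modalities are re-normalized among themselves.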
DOI: 10.48550/arxiv.2305.02504
Source: arXiv.org
Subjects: Computer Science - Learning