Patient representation learning and interpretable evaluation using clinical notes

We have three contributions in this work: 1. We explore the utility of a stacked denoising autoencoder and a paragraph vector model to learn task-independent dense patient representations directly from clinical notes. To analyze if these representations are transferable across tasks, we evaluate the...

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2018-07
Main authors: Madhumita Sushil, Šuster, Simon, Luyckx, Kim, Daelemans, Walter
Format: Article
Language: English
Subjects:
Online access: Full text
description We make three contributions in this work:
1. We explore the utility of a stacked denoising autoencoder and a paragraph vector model to learn task-independent dense patient representations directly from clinical notes. To analyze whether these representations are transferable across tasks, we evaluate them in multiple supervised setups to predict patient mortality, primary diagnostic and procedural category, and gender, and we compare their performance with sparse representations obtained from a bag-of-words model. We observe that the learned generalized representations significantly outperform the sparse representations when there are few positive instances to learn from and strong lexical features are absent.
2. We compare the performance of a feature set constructed from a bag of words to one built from medical concepts; in the latter case, concepts represent problems, treatments, and tests. We find that concept identification does not improve classification performance.
3. We propose novel techniques to facilitate model interpretability. To understand and interpret the representations, we examine the best-encoded features within the patient representations obtained from the autoencoder model. Further, we calculate feature sensitivity across two networks to identify the most significant input features for different classification tasks when these pretrained representations are used as the supervised input. Using this technique, we successfully extract the most influential features for the pipeline.
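The core idea of a denoising autoencoder over note-derived bag-of-words vectors can be sketched as follows. This is a minimal illustrative NumPy implementation on toy data (a single hidden layer with tied weights and masking noise), not the authors' actual model; the data, layer size, and hyperparameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "bag-of-words" vectors: 200 patients, 50 vocabulary terms
# (hypothetical stand-in for real clinical-note features).
X = (rng.random((200, 50)) < 0.2).astype(float)

n_hidden = 16
W = rng.normal(0, 0.1, (50, n_hidden))  # tied weights: W encodes, W.T decodes
b_h = np.zeros(n_hidden)
b_o = np.zeros(50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x_noisy):
    h = sigmoid(x_noisy @ W + b_h)      # dense patient representation
    x_hat = sigmoid(h @ W.T + b_o)      # reconstruction of the clean input
    return h, x_hat

lr = 0.5
losses = []
for epoch in range(30):
    # Masking noise: randomly zero out ~30% of the input features.
    mask = (rng.random(X.shape) > 0.3).astype(float)
    h, x_hat = forward(X * mask)
    # Cross-entropy reconstruction loss against the *clean* input.
    loss = -np.mean(X * np.log(x_hat + 1e-9) + (1 - X) * np.log(1 - x_hat + 1e-9))
    losses.append(loss)
    # Backprop; sigmoid output + cross-entropy gives a simple output delta.
    d_out = (x_hat - X) / X.shape[0]
    d_h = (d_out @ W) * h * (1 - h)
    # Tied weights receive gradient from both the encoder and decoder paths.
    grad_W = (X * mask).T @ d_h + d_out.T @ h
    W -= lr * grad_W
    b_h -= lr * d_h.sum(axis=0)
    b_o -= lr * d_out.sum(axis=0)

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, the hidden activations `h` serve as the task-independent dense patient representation that downstream classifiers consume; stacking repeats this step with each layer's codes as the next layer's input.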
doi_str_mv 10.48550/arxiv.1807.01395
identifier EISSN: 2331-8422
issn 2331-8422
recordid cdi_arxiv_primary_1807_01395
source arXiv.org; Free E-Journals
subjects Classification
Coding
Computer Science - Computation and Language
Computer Science - Learning
Diagnostic systems
Feature extraction
Noise reduction
Representations