Adapting and evaluating a deep learning language model for clinical why-question answering
Objectives: To adapt and evaluate a deep learning language model for answering why-questions based on patient-specific clinical text. Materials and Methods: Bidirectional encoder representations from transformers (BERT) models were trained with varying data sources to perform SQuAD 2.0-style why-question answering (why-QA) on clinical notes. The evaluation focused on (1) comparing the merits of different training data and (2) error analysis. Results: The best model achieved an accuracy of 0.707 (or 0.760 by partial match). Customizing the training toward clinical language increased accuracy by 6%. Discussion: The error analysis suggested that the model did not perform deep reasoning and that clinical why-QA might warrant more sophisticated solutions. Conclusion: The BERT model achieved moderate accuracy in clinical why-QA and should benefit from the rapidly evolving technology. Despite the identified limitations, it could serve as a competent proxy for question-driven clinical information extraction.
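The methods sentence describes SQuAD 2.0-style extractive QA: a BERT encoder scores every context token as a candidate answer start or end, and the [CLS] position doubles as a "no answer" option when the question is unanswerable. A minimal sketch of the span-selection step over hypothetical logits (an illustration of the general technique, not the authors' model or code):

```python
def best_span(start_logits, end_logits, max_len=30, null_threshold=0.0):
    """Pick the highest-scoring answer span (start, end) over token positions,
    or return None for 'no answer' when the best span does not beat the
    [CLS]/null score -- the SQuAD 2.0 convention for unanswerable questions."""
    # Position 0 is assumed to be [CLS]; its score stands for "no answer".
    null_score = start_logits[0] + end_logits[0]
    best_span_found, best_score = None, float("-inf")
    for s in range(1, len(start_logits)):
        # Only consider spans of bounded length ending at or after the start.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_span_found, best_score = (s, e), score
    if best_span_found is None or best_score < null_score + null_threshold:
        return None  # unanswerable under the SQuAD 2.0 convention
    return best_span_found

# Hypothetical logits over 5 tokens ([CLS] + 4 context tokens)
start = [1.0, 0.2, 3.0, 0.1, 0.0]
end = [1.0, 0.1, 0.5, 2.5, 0.2]
print(best_span(start, end))  # → (2, 3)
```

In a real system the two logit vectors come from a span-prediction head on top of BERT, and the selected token span is mapped back to the answer text in the clinical note.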
Published in: | JAMIA open 2020-04, Vol.3 (1), p.16-20 |
Main authors: | Wen, Andrew; Elwazir, Mohamed Y; Moon, Sungrim; Fan, Jungwei |
Format: | Article |
Language: | English |
Subjects: | Brief Communication; Computational linguistics; Language processing; Natural language interfaces |
Online access: | Full text |
creator | Wen, Andrew; Elwazir, Mohamed Y; Moon, Sungrim; Fan, Jungwei |
description | Objectives: To adapt and evaluate a deep learning language model for answering why-questions based on patient-specific clinical text. Materials and Methods: Bidirectional encoder representations from transformers (BERT) models were trained with varying data sources to perform SQuAD 2.0-style why-question answering (why-QA) on clinical notes. The evaluation focused on (1) comparing the merits of different training data and (2) error analysis. Results: The best model achieved an accuracy of 0.707 (or 0.760 by partial match). Customizing the training toward clinical language increased accuracy by 6%. Discussion: The error analysis suggested that the model did not perform deep reasoning and that clinical why-QA might warrant more sophisticated solutions. Conclusion: The BERT model achieved moderate accuracy in clinical why-QA and should benefit from the rapidly evolving technology. Despite the identified limitations, it could serve as a competent proxy for question-driven clinical information extraction. |
doi_str_mv | 10.1093/jamiaopen/ooz072 |
format | Article |
fullrecord | Identifiers: ISSN 2574-2531; EISSN 2574-2531; DOI 10.1093/jamiaopen/ooz072; PMID 32607483. Publisher: Oxford University Press (United States). Rights: The Author(s) 2020; published by Oxford University Press on behalf of the American Medical Informatics Association. Full text: PubMed Central (PMC7309262); Oxford Journals Open Access Collection. ORCID: 0000-0001-6349-3752 |
fulltext | fulltext |
identifier | ISSN: 2574-2531 |
ispartof | JAMIA open, 2020-04, Vol.3 (1), p.16-20 |
issn | 2574-2531 (ISSN); 2574-2531 (EISSN) |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_7309262 |
source | Oxford Journals Open Access Collection; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; PubMed Central |
subjects | Brief Communication; Computational linguistics; Language processing; Natural language interfaces |
title | Adapting and evaluating a deep learning language model for clinical why-question answering |