Adapting and evaluating a deep learning language model for clinical why-question answering


Detailed Description

Bibliographic Details
Published in: JAMIA Open 2020-04, Vol. 3 (1), p. 16-20
Authors: Wen, Andrew; Elwazir, Mohamed Y; Moon, Sungrim; Fan, Jungwei
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Objectives: To adapt and evaluate a deep learning language model for answering why-questions based on patient-specific clinical text. Materials and Methods: Bidirectional encoder representations from transformers (BERT) models were trained with varying data sources to perform SQuAD 2.0 style why-question answering (why-QA) on clinical notes. The evaluation focused on (1) comparing the merits of different training data and (2) error analysis. Results: The best model achieved an accuracy of 0.707 (0.760 by partial match). Customizing training toward clinical language increased accuracy by 6%. Discussion: The error analysis suggested that the model did not perform deep reasoning and that clinical why-QA might warrant more sophisticated solutions. Conclusion: The BERT model achieved moderate accuracy in clinical why-QA and should benefit from the rapidly evolving technology. Despite the identified limitations, it could serve as a competent proxy for question-driven clinical information extraction.
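The exact-match and partial-match accuracies reported above can be illustrated with a small scoring sketch. This is a hedged, minimal example of how SQuAD-style exact and overlap-based matching are commonly computed; the helper names and the token-overlap criterion are assumptions for illustration, not the authors' actual evaluation code.

```python
def _normalize(text: str) -> str:
    # Lowercase and collapse whitespace before comparison,
    # a common normalization step in SQuAD-style scoring.
    return " ".join(text.lower().split())

def exact_match(pred: str, gold: str) -> bool:
    # Answer counts as correct only if it equals the gold span after normalization.
    return _normalize(pred) == _normalize(gold)

def partial_match(pred: str, gold: str) -> bool:
    # Looser criterion (an assumption here): credit any token overlap
    # between the predicted and gold answer spans.
    return bool(set(_normalize(pred).split()) & set(_normalize(gold).split()))

def accuracy(pairs, match_fn) -> float:
    # Fraction of (prediction, gold) pairs judged correct by match_fn.
    return sum(match_fn(p, g) for p, g in pairs) / len(pairs)

# Hypothetical (prediction, gold) pairs for a clinical why-QA task.
pairs = [
    ("atrial fibrillation", "atrial fibrillation"),            # exact match
    ("new-onset atrial fibrillation", "atrial fibrillation"),  # partial only
    ("chest pain", "shortness of breath"),                     # no match
]

print(accuracy(pairs, exact_match))    # 1 of 3 correct
print(accuracy(pairs, partial_match))  # 2 of 3 correct
```

Under this kind of scheme, partial-match accuracy is always at least as high as exact-match accuracy, consistent with the 0.707 versus 0.760 figures in the abstract.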
ISSN: 2574-2531
DOI: 10.1093/jamiaopen/ooz072