Speculation detection for Chinese clinical notes: Impacts of word segmentation and embedding models
Published in: Journal of Biomedical Informatics, 2016-04, Vol. 60, pp. 334-341
Format: Article
Language: English
Highlights:
• Chinese speculation detection is approached as a supervised sequence-labeling task.
• Embedding features can enhance system accuracy, especially word embeddings.
• Domain-specific word segmentation is critical to Chinese speculation detection.

Abstract:
Speculations represent uncertainty toward certain facts. In clinical texts, identifying speculations is a critical step of natural language processing (NLP). While it is a nontrivial task in many languages, detecting speculations in Chinese clinical notes can be particularly challenging because word segmentation may be necessary as an upstream operation. The objective of this paper is to construct a state-of-the-art speculation detection system for Chinese clinical notes and to investigate whether embedding features and word segmentations are worth exploiting toward this overall task. We propose a sequence labeling based system for speculation detection, which relies on features from bag of characters, bag of words, character embedding, and word embedding. We experiment on a novel dataset of 36,828 clinical notes with 5103 gold-standard speculation annotations on 2000 notes, and compare the systems in which word embeddings are calculated based on word segmentations given by general and by domain-specific segmenters respectively. Our systems are able to reach performance as high as 92.2% measured by F score. We demonstrate that word segmentation is critical to produce high quality word embedding to facilitate downstream information extraction applications, and suggest that a domain dependent word segmenter can be vital to such a clinical NLP task in Chinese language.
ISSN: 1532-0464, 1532-0480
DOI: 10.1016/j.jbi.2016.02.011
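The abstract describes a character-level sequence-labeling system whose features combine bag-of-characters, bag-of-words, and character/word embeddings derived from a prior word segmentation. The sketch below is a minimal illustration of that general setup, not the authors' implementation: it assumes jieba as a stand-in general-domain Chinese segmenter, sklearn_crfsuite as the sequence labeler, a toy embedding lookup (EMB) in place of corpus-trained embeddings, and illustrative B-SPEC/I-SPEC/O tags for speculation cues.

```python
# Minimal sketch (assumptions noted above): character-level CRF labeling of
# speculation cues in Chinese text, with character-window, segmented-word,
# and embedding-valued features.
import jieba                 # stand-in general-domain Chinese segmenter
import sklearn_crfsuite      # CRF sequence labeler

# Toy 2-d embeddings; a real system would train these on the note corpus.
EMB = {"可能": [0.12, -0.40], "肺炎": [0.55, 0.10]}

def char_features(text, words, i):
    """Features for character i: the character, a small character window,
    the segmented word covering it, and that word's embedding values."""
    spans = []
    for w in words:              # map each character position to its word
        spans.extend([w] * len(w))
    word = spans[i]
    feats = {
        "char": text[i],
        "prev_char": text[i - 1] if i > 0 else "<BOS>",
        "next_char": text[i + 1] if i < len(text) - 1 else "<EOS>",
        "word": word,
    }
    # Continuous embedding features; crfsuite treats numeric values as weights.
    for k, v in enumerate(EMB.get(word, [0.0, 0.0])):
        feats[f"emb_{k}"] = v
    return feats

def featurize(text):
    words = jieba.lcut(text)     # a domain-specific segmenter could be swapped in here
    return [char_features(text, words, i) for i in range(len(text))]

# Toy example: "可能肺炎" ("possible pneumonia"); "可能" is the speculation cue.
X = [featurize("可能肺炎")]
y = [["B-SPEC", "I-SPEC", "O", "O"]]   # per-character BIO tags

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```

Because the word-level features (the covering word and its embedding) depend entirely on the segmentation step, swapping the general segmenter for a domain-specific one changes the feature space directly, which is consistent with the paper's finding that segmentation quality drives downstream accuracy.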