Fine-Grained Relation Extraction for Drug Instructions Using Contrastive Entity Enhancement

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 51777-51788
Authors: Gao, Feng; Song, Xuren; Gu, Jinguang; Zhang, Lihua; Liu, Yun; Zhang, Xiaoliang; Liu, Yu; Jing, Shenqi
Format: Article
Language: English
Online access: Full text
Description
Abstract: Extracting relations between drug-related entities from drug instructions is a critical task, essential for clinical diagnostic decision-making and drug use regulation. However, because of the complexity of the textual descriptions in drug instructions, fine-grained relations are difficult to extract even with a considerable amount of training data. Moreover, since manually labeled, high-quality datasets in the pharmaceutical domain are typically expensive, obtaining an extensive and accurate training dataset can be challenging. To overcome these challenges, this paper proposes a drug relation extraction framework that combines entity information enhancement with contrastive feature learning, enabling better extraction of fine-grained relations from limited data. Specifically, a sample generator creates groups of samples carrying role semantic information from the training set, an entity encoder embeds entity role and context information to enhance the semantic representation, and a contrastive learning module employs a hybrid loss function to learn inter-sample and intra-sample differences. An empirical study indicates that the contrastive entity enhancement approach achieves higher extraction accuracy and better generalization. In particular, the model reaches an F1 score of 0.8892, a 7.13% improvement over the baseline pre-training method.
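The abstract describes a hybrid loss that combines a classification objective with a contrastive term pulling same-relation samples together and pushing different-relation samples apart. The paper's exact formulation is not given here; the sketch below is a minimal, hypothetical illustration of such a hybrid objective, using cross-entropy plus an InfoNCE-style contrastive term. The mixing weight `alpha` and temperature `tau` are illustrative assumptions, not values from the paper.

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the gold relation label.
    return -math.log(probs[label])

def contrastive_term(anchor, positives, negatives, tau=0.1):
    # InfoNCE-style term: rewards high similarity between the anchor
    # and same-relation (positive) samples relative to different-relation
    # (negative) samples. Illustrative stand-in for the paper's
    # inter-/intra-sample contrastive objective.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = [math.exp(cos(anchor, p) / tau) for p in positives]
    neg = [math.exp(cos(anchor, n) / tau) for n in negatives]
    denom = sum(pos) + sum(neg)
    return -sum(math.log(p / denom) for p in pos) / len(pos)

def hybrid_loss(probs, label, anchor, positives, negatives, alpha=0.5):
    # Weighted sum of the classification and contrastive objectives;
    # alpha is a hypothetical mixing weight, not taken from the paper.
    return (alpha * cross_entropy(probs, label)
            + (1 - alpha) * contrastive_term(anchor, positives, negatives))
```

As expected for such a loss, the contrastive term is small when the anchor embedding lies close to its positives and large when it lies close to its negatives, so minimizing it clusters samples of the same relation type.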
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3279288