An Easy Partition Approach for Joint Entity and Relation Extraction
Saved in:
Published in: Applied Sciences 2023-07, Vol. 13 (13), p. 7585
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The triplet extraction (TE) task aims to identify the entities and relations mentioned in a given text. TE consists of two subtasks: named entity recognition (NER) and relation classification (RC). Previous work has treated TE either as two separate tasks with independent encoders or as a single task with a unified encoder. However, both approaches have limitations in capturing the interaction and the independence of the features needed by the different subtasks. In this paper, we propose a simple and direct feature selection and interaction scheme. Specifically, we use a pretrained language model (e.g., BERT) to extract several kinds of features: entity recognition features, shared features, and relation classification features. To capture the interaction, the shared features carry the common semantic information used by both tasks simultaneously. We use a gate module to obtain the task-specific features. Experimental results on several public benchmarks show that our proposed method achieves competitive performance, and our model is seven times faster than CasRel and two times faster than PFN.
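The gate module the abstract mentions — filtering shared semantic features into task-specific ones — is commonly realized as a learned sigmoid gate that mixes the two feature streams as a convex combination. A minimal pure-Python sketch of that general idea (function names and per-dimension scalar weights are illustrative, not the paper's actual formulation, which would use learned weight matrices over hidden vectors):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gate_fuse(task_feat, shared_feat, w_task, w_shared, bias):
    """Element-wise sigmoid gate over a task-specific and a shared feature vector.

    For each dimension i:
        g_i   = sigmoid(w_task_i * t_i + w_shared_i * s_i + b_i)
        out_i = g_i * t_i + (1 - g_i) * s_i   # convex combination of the two streams

    Hypothetical simplification: real models apply learned matrices to full
    hidden vectors rather than a scalar weight per dimension.
    """
    out = []
    for t, s, wt, ws, b in zip(task_feat, shared_feat, w_task, w_shared, bias):
        g = sigmoid(wt * t + ws * s + b)
        out.append(g * t + (1.0 - g) * s)
    return out

# Example: fuse a 2-dim task feature with a 2-dim shared feature.
fused = gate_fuse([1.0, -0.5], [0.2, 0.3], [0.5, 0.5], [0.5, 0.5], [0.0, 0.0])
```

Because the gate value lies in (0, 1), each output dimension is guaranteed to lie between the task-specific and shared feature values, which is what lets the model smoothly trade off shared against task-specific information.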
ISSN: 2076-3417
DOI: 10.3390/app13137585