Leveraging large language models for efficient representation learning for entity resolution
Format: Article
Language: English
Online access: Order full text
Summary: In this paper, the authors propose TriBERTa, a supervised entity resolution system that uses a pre-trained large language model and a triplet loss function to learn representations for entity matching. The system consists of two steps: first, entity name records are fed into a Sentence Bidirectional Encoder Representations from Transformers (SBERT) model to generate vector representations, which are then fine-tuned via contrastive learning based on a triplet loss function. The fine-tuned representations are used as input for entity matching tasks, and the results show that the proposed approach outperforms state-of-the-art representations, including SBERT without fine-tuning and conventional Term Frequency-Inverse Document Frequency (TF-IDF), by a margin of 3-19%. Additionally, the representations generated by TriBERTa demonstrate increased robustness, maintaining consistently higher performance across a range of datasets. The authors also discuss the importance of entity resolution in today's data-driven landscape and the challenges that arise when identifying and reconciling duplicate records across different sources, and they describe the entity resolution (ER) process, which involves several crucial steps: blocking, entity matching, and clustering.
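The following is a minimal sketch of the two-step approach the summary describes, built on the open-source sentence-transformers library. The base checkpoint, triplet margin, training schedule, and toy records are illustrative assumptions, not details taken from the paper.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

# Step 1: a pre-trained SBERT model maps entity name records to dense vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint, not from the paper

# Step 2: contrastive fine-tuning with a triplet loss. Each example pulls an
# anchor record toward a record of the same real-world entity (positive) and
# pushes it away from a record of a different entity (negative).
train_examples = [
    InputExample(texts=[
        "Jon Smith, 42 Oak St",       # anchor
        "John Smith, 42 Oak Street",  # positive: same entity, different surface form
        "Joan Smyth, 7 Elm Ave",      # negative: different entity
    ]),
    # ... more triplets mined from labeled duplicate / non-duplicate records
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
triplet_loss = losses.TripletLoss(model=model, triplet_margin=5.0)  # margin is an assumption

model.fit(train_objectives=[(train_loader, triplet_loss)], epochs=1, warmup_steps=10)

# Entity matching: records whose fine-tuned embeddings lie close together are
# treated as referring to the same real-world entity.
embeddings = model.encode(["Jon Smith, 42 Oak St", "John Smith, 42 Oak Street"])
print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity -> likely match
```

In a full ER pipeline of the kind the summary outlines, a blocking step would first prune the set of candidate record pairs before such pairwise similarity scoring, and a clustering step would then group the matched records into entities.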
DOI: 10.48550/arxiv.2411.10629