LEMON: LanguagE ModeL for Negative Sampling of Knowledge Graph Embeddings
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Knowledge Graph Embedding models have become an important area of
machine learning. These models provide a latent representation of the entities
and relations in a knowledge graph, which can then be used in downstream
machine learning tasks such as link prediction. Such models can be trained by
contrasting positive and negative triples. While all triples of a KG are
considered positive, negative triples are usually not readily available.
Therefore, the choice of the sampling method used to obtain negative triples
plays a crucial role in the performance and effectiveness of Knowledge Graph
Embedding models. Most current methods draw negative samples from a random
distribution over the entities of the underlying Knowledge Graph, which often
yields meaningless triples. Other known methods use adversarial techniques or
generative neural networks, which reduce the efficiency of the process. In
this paper, we propose an approach for generating informative negative samples
that takes complementary knowledge about entities into account. In particular,
Pre-trained Language Models are used to obtain representations of symbolic
entities from their textual information, and neighborhood clusters are formed
from the distances between these entity representations. Our comprehensive
evaluations demonstrate the effectiveness of the proposed approach on
benchmark Knowledge Graphs with textual information for the link prediction
task. |
---|---|
DOI: | 10.48550/arxiv.2203.04703 |
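The abstract describes sampling negatives from the embedding-space neighborhood of an entity rather than from a uniform distribution. The following is a minimal sketch of that idea, not the paper's actual implementation: the entity names, the random vectors standing in for PLM-derived text embeddings, and the `hard_negative_tail` helper are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: in the described approach, each entity's vector would come
# from a pre-trained language model applied to its textual description.
entities = ["Berlin", "Paris", "Germany", "France", "Einstein"]
emb = rng.normal(size=(len(entities), 8))  # hypothetical PLM text embeddings

def hard_negative_tail(triple, emb, k=2, rng=rng):
    """Corrupt the tail of (h, r, t) with an entity whose text embedding
    lies in the neighborhood of t, yielding an informative (hard) negative
    instead of a uniformly random, likely meaningless one."""
    h, r, t = triple
    dists = np.linalg.norm(emb - emb[t], axis=1)
    dists[t] = np.inf                        # never pick the true tail itself
    neighbors = np.argsort(dists)[:k]        # k nearest entities to t
    return (h, r, int(rng.choice(neighbors)))

pos = (0, 0, 2)                # e.g. a positive triple (Berlin, locatedIn, Germany)
neg = hard_negative_tail(pos, emb)
```

The negative triple keeps the head and relation of the positive one and replaces only the tail with a nearby, hence plausible, entity; a uniform sampler would instead pick any entity at random.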