Learning Granularity Representation for Temporal Knowledge Graph Completion
Saved in:
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Temporal Knowledge Graphs (TKGs) incorporate temporal information to reflect
the dynamic structural knowledge and evolutionary patterns of real-world facts.
Nevertheless, TKGs are still limited in downstream applications due to the
problem of incompleteness. Consequently, TKG completion (also known as link
prediction) has been widely studied, with recent research focusing on
incorporating independent embeddings of time or combining them with entities
and relations to form temporal representations. However, most existing methods
overlook the impact of history from a multi-granularity aspect. The inherent
semantics of human-defined temporal granularities, such as ordinal dates,
reveal general patterns to which facts typically adhere. To counter this
limitation, this paper proposes **L**earning **G**ranularity
**Re**presentation (termed LGRe) for TKG completion. It
comprises two main components: Granularity Representation Learning (GRL) and
Adaptive Granularity Balancing (AGB). Specifically, GRL employs time-specific
multi-layer convolutional neural networks to capture interactions between
entities and relations at different granularities. After that, AGB generates
adaptive weights for these embeddings according to temporal semantics,
resulting in expressive representations of predictions. Moreover, to reflect
similar semantics of adjacent timestamps, a temporal loss function is
introduced. Extensive experimental results on four event benchmarks demonstrate
the effectiveness of LGRe in learning time-related representations.
To ensure reproducibility, our code is available at
https://github.com/KcAcoZhang/LGRe. |
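The adaptive granularity weighting and adjacent-timestamp loss described in the abstract can be illustrated with a minimal numpy sketch. This is a hypothetical illustration only, not the authors' implementation: the function names (`combine_granularities`, `temporal_smoothness`), the choice of softmax weighting, and the squared-difference penalty are assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def combine_granularities(grain_embs, grain_scores):
    """Blend embeddings learned at different temporal granularities
    (e.g. year / month / day) with adaptive weights, loosely mirroring
    the Adaptive Granularity Balancing idea at a high level.

    grain_embs:   dict granularity -> embedding vector
    grain_scores: dict granularity -> scalar relevance (temporal semantics)
    """
    names = sorted(grain_embs)  # fixed ordering for reproducibility
    w = softmax(np.array([grain_scores[n] for n in names]))
    blended = sum(wi * grain_embs[n] for wi, n in zip(w, names))
    return blended, dict(zip(names, w))

def temporal_smoothness(embs_by_time):
    # A simple penalty encouraging adjacent timestamps to have similar
    # representations: sum of squared differences between neighbors.
    return sum(float(np.sum((a - b) ** 2))
               for a, b in zip(embs_by_time, embs_by_time[1:]))

# Toy usage: three granularity-specific embeddings for one prediction.
embs = {"year": np.ones(4), "month": 2 * np.ones(4), "day": 3 * np.ones(4)}
scores = {"year": 0.1, "month": 0.5, "day": 1.0}
vec, weights = combine_granularities(embs, scores)
loss = temporal_smoothness([np.zeros(2), np.ones(2)])
```

The softmax here stands in for whatever learned scoring the paper's AGB module uses; the key point is that granularities judged more relevant for a given timestamp receive larger mixing weights, and the smoothness term ties neighboring timestamps together.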
DOI: | 10.48550/arxiv.2408.15293 |