ATPF: An Adaptive Temporal Perturbation Framework for Adversarial Attacks on Temporal Knowledge Graph

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2025-03, Vol. 37 (3), pp. 1-14
Authors: Liao, Longquan; Zheng, Linjiang; Shang, Jiaxing; Li, Xu; Chen, Fengwen
Format: Article
Language: English
Abstract:
Robustness is paramount for ensuring the reliability of knowledge graph models in safety-sensitive applications. While recent research has delved into adversarial attacks on static knowledge graph models, attacks on the more practical temporal knowledge graphs remain largely unexplored. To fill this gap, we present the Adaptive Temporal Perturbation Framework (ATPF), a novel adversarial attack framework for probing the robustness of temporal knowledge graph (TKG) models. The general idea of ATPF is to inject perturbations into the victim model's input to undermine its predictions. First, we propose the Temporal Perturbation Prioritization (TPP) algorithm, which identifies the optimal time sequence for perturbation injection before initiating attacks. We then design the Rank-Based Edge Manipulation (RBEM) algorithm, which generates both edge-addition and edge-removal perturbations in a black-box setting. Building on ATPF, we present two adversarial attack methods, the stringent ATPF-hard and the more lenient ATPF-soft, each imposing different perturbation constraints. Experimental evaluations on the link prediction task for TKGs demonstrate the superior attack performance of our methods over baseline methods. Furthermore, we find that strategically placing a single perturbation often suffices to compromise a target link.
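
For intuition, below is a minimal Python sketch of the two-stage pipeline the abstract outlines. It is not the authors' implementation: the black-box model interface (score, rank), its mask_time and perturb arguments, the per-timestamp candidate-edge pools, and the success_rank threshold are all hypothetical placeholders introduced here for illustration.

def tpp_order(model, target, timestamps):
    # Temporal Perturbation Prioritization (sketch): rank timestamps by how
    # much hiding the facts at each timestamp shifts the victim model's score
    # for the target link, so the most sensitive times are attacked first.
    base = model.score(target)
    return sorted(timestamps,
                  key=lambda t: abs(base - model.score(target, mask_time=t)),
                  reverse=True)

def rbem_perturb(model, target, candidates, budget):
    # Rank-Based Edge Manipulation (sketch): greedily choose the edge
    # addition or removal whose injection most worsens the target link's
    # predicted rank, using only black-box rank queries.
    chosen, pool = [], list(candidates)
    for _ in range(budget):
        best = max(pool, key=lambda e: model.rank(target, perturb=chosen + [e]))
        chosen.append(best)
        pool.remove(best)
    return chosen

def atpf_attack(model, target, candidates_by_time, budget=1, success_rank=10):
    # Overall loop: walk timestamps in TPP order, inject RBEM perturbations,
    # and stop once the target link's rank falls past the success threshold
    # (the abstract notes a single perturbation often suffices).
    perturbations = []
    for t in tpp_order(model, target, list(candidates_by_time)):
        perturbations += rbem_perturb(model, target, candidates_by_time[t], budget)
        if model.rank(target, perturb=perturbations) > success_rank:
            break
    return perturbations

Under these assumptions, the sketch mirrors the abstract's ordering of the two algorithms: TPP decides where in time to attack, RBEM decides which edges to manipulate there, and the attack halts as soon as the target link is pushed past the rank threshold.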
ISSN: 1041-4347 (print); 1558-2191 (electronic)
DOI: 10.1109/TKDE.2024.3510689