IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks
Saved in:
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications. However, the graph data used for training may contain sensitive personal information of the involved individuals. Once trained, GNNs typically encode such information in their learnable parameters. As a consequence, privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers. Facing such a threat, machine unlearning for GNNs has become an emerging technique that aims to remove certain personal information from a trained GNN. Among these techniques, certified unlearning stands out, as it provides a solid theoretical guarantee of information-removal effectiveness. Nevertheless, most existing certified unlearning methods for GNNs are designed to handle only node and edge unlearning requests. Moreover, these approaches are usually tailored to either a specific GNN design or a specially designed training objective. These limitations significantly restrict their flexibility. In this paper, we propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs. Specifically, we first instantiate four types of unlearning requests on graphs, and then propose an approximation approach to flexibly handle these requests over diverse GNNs. We further provide a theoretical guarantee of the effectiveness of the proposed approach as a certification. Unlike existing alternatives, IDEA is not tied to any specific GNN design or optimization objective to perform certified unlearning, and can thus be easily generalized. Extensive experiments on real-world datasets demonstrate the superiority of IDEA from multiple key perspectives. |
DOI: | 10.48550/arxiv.2407.19398 |
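The abstract does not spell out IDEA's approximation mechanism. Below is a minimal, hypothetical sketch of the influence-function (one-step Newton) update commonly used in the certified-unlearning literature, shown on a small L2-regularized logistic-regression weight vector rather than a GNN so that the full Hessian is tractable. All names here (`loss`, `unlearn`, `X`, `y`, `remove_idx`, `lam`, `sigma`) are illustrative assumptions, not IDEA's actual method or API.

```python
import torch

def loss(w, X, y, lam):
    # L2-regularized logistic loss, summed over the batch.
    logits = X @ w
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, y, reduction="sum") + 0.5 * lam * w.dot(w)

def unlearn(w, X, y, remove_idx, lam, sigma=0.0):
    """One-shot approximate unlearning of the rows in remove_idx.

    Generic influence-function-style update (as in the certified-removal
    literature), NOT necessarily IDEA's estimator.
    """
    keep = torch.ones(len(y), dtype=torch.bool)
    keep[remove_idx] = False
    Xr, yr = X[keep], y[keep]  # retained data

    # Gradient of the retained loss at the current parameters; at the
    # original optimum this equals minus the removed points' contribution.
    g = torch.autograd.grad(loss(w, Xr, yr, lam), w)[0]

    # Explicit Hessian of the retained loss (feasible only because w is
    # low-dimensional; GNN-scale methods approximate the inverse-Hessian-
    # vector product instead).
    H = torch.autograd.functional.hessian(lambda v: loss(v, Xr, yr, lam), w)

    # One Newton step toward the retained-data optimum, plus optional
    # Gaussian noise used by certified methods to mask the residual error.
    w_new = w - torch.linalg.solve(H, g)
    return (w_new + sigma * torch.randn_like(w_new)).detach().requires_grad_()
```

In this family of methods, the Newton step cancels the removed points' influence up to a residual that can be bounded analytically, and the calibrated noise `sigma` converts that bound into a differential-privacy-style indistinguishability certificate; how IDEA extends such guarantees to four request types and diverse GNN architectures is the subject of the paper itself.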