NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs
Saved in:
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Graph Neural Networks (GNNs) have achieved promising results for
semi-supervised learning tasks on graphs such as node classification. Despite
the great success of GNNs, many real-world graphs are often sparsely and
noisily labeled, which could significantly degrade the performance of GNNs, as
the noisy information could propagate to unlabeled nodes via graph structure.
Thus, it is important to develop a label noise-resistant GNN for
semi-supervised node classification. Though extensive studies have been
conducted to learn neural networks with noisy labels, they mostly focus on
independent and identically distributed data and assume a large number of noisy
labels are available, which are not directly applicable to GNNs. Therefore, we
investigate a novel problem of learning a robust GNN with noisy and limited
labels. To alleviate the negative effects of label noise, we propose to link
the unlabeled nodes with labeled nodes of high feature similarity to bring more
clean label information. Furthermore, accurate pseudo labels could be obtained
by this strategy to provide more supervision and further reduce the effects of
label noise. Our theoretical and empirical analyses verify the effectiveness of
these two strategies under mild conditions. Extensive experiments on real-world
datasets demonstrate the effectiveness of the proposed method in learning a
robust GNN with noisy and limited labels. |
---|---|
DOI: | 10.48550/arxiv.2106.04714 |
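
The two strategies summarized in the abstract can be sketched in a few lines of PyTorch. This is a minimal sketch under assumed inputs (a node feature matrix `x`, index tensors `labeled_idx` and `unlabeled_idx`, an `edge_index` in COO format, and classifier `logits`); the helper names and the fixed similarity/confidence thresholds are illustrative assumptions, not the paper's actual NRGNN components, which learn an edge predictor jointly with the GNN rather than thresholding raw feature similarity.

```python
import torch
import torch.nn.functional as F


def link_unlabeled_to_similar_labeled(x, labeled_idx, unlabeled_idx,
                                      edge_index, sim_threshold=0.9):
    """Connect unlabeled nodes to labeled nodes with highly similar features.

    Hypothetical illustration of the abstract's first strategy: NRGNN learns
    an edge predictor, whereas this sketch uses a fixed cosine-similarity cutoff.
    """
    x_norm = F.normalize(x, dim=1)
    # Cosine similarity between every unlabeled node and every labeled node.
    sim = x_norm[unlabeled_idx] @ x_norm[labeled_idx].T  # shape [U, L]
    rows, cols = (sim > sim_threshold).nonzero(as_tuple=True)
    new_edges = torch.stack([unlabeled_idx[rows], labeled_idx[cols]])
    # Append the new edges in both directions to keep the graph undirected.
    return torch.cat([edge_index, new_edges, new_edges.flip(0)], dim=1)


def confident_pseudo_labels(logits, unlabeled_idx, conf_threshold=0.95):
    """Turn high-confidence predictions on unlabeled nodes into pseudo labels,
    illustrating the abstract's second strategy of adding extra supervision."""
    probs = F.softmax(logits[unlabeled_idx], dim=1)
    conf, pred = probs.max(dim=1)
    keep = conf > conf_threshold
    return unlabeled_idx[keep], pred[keep]
```

In use, the augmented `edge_index` would be fed to an ordinary GNN for semi-supervised node classification, and the pseudo-labeled nodes would be added to the training set in later epochs; the thresholds here are placeholders that would need tuning per dataset.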