Confidence correction for trained graph convolutional networks

Detailed Description

Bibliographic Details
Published in: Pattern Recognition, 2024-12, Vol. 156, p. 110773, Article 110773
Main Authors: Yuan, Junqing; Guo, Huanlei; Zhou, Chenyi; Ding, Jiajun; Kuang, Zhenzhong; Yu, Zhou; Liu, Yuan
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Adopting Graph Convolutional Networks (GCNs) for transductive node classification is an active research direction in artificial intelligence. Vanilla GCNs are often under-confident and struggle to make their final classification results explicit due to the lack of supervision. Existing works mainly alleviate this issue by mitigating the annotation deficiency or introducing additional regularization terms. However, these methods require re-training the model from scratch, which is computationally expensive for large datasets and models. To address this problem, a novel confidence correction mechanism (CCM) for trained GCNs is proposed in this work. The mechanism calibrates the confidence output of each node in the inference stage by jointly inferring the features and the predicted pseudo label. Specifically, at inference time it uses the predicted pseudo label to select target-related features across the whole network, yielding a more confident and better result. This selection is formulated as an optimization problem that maximizes the category score of each node. A greedy optimization strategy is used to solve this problem, and it is proven by mathematical induction that the proposed mechanism reaches a local optimum. Note that the mechanism is flexible and can be introduced into most GCN-based models. Extensive experimental results on benchmark datasets show that the proposed method raises the confidence of the final target category and improves the performance of GCNs in the inference stage.

• A simple and effective mechanism for the under-confidence problem of existing GCNs.
• CCM is formulated as an optimization problem and solved by a greedy strategy.
• CCM is model-agnostic and can be applied to most existing GCNs.
• CCM helps calibrate the confidence and improves the classification accuracy.
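The abstract does not give CCM's exact formulation, but the general idea of greedily selecting pseudo-label-related feature components at inference time can be sketched as follows. This is a minimal, hypothetical illustration: the per-node feature components `parts`, the frozen linear classifier `W`, and the greedy acceptance rule are illustrative stand-ins under assumed definitions, not the paper's actual method.

```python
# Illustrative sketch only: the classifier, the feature decomposition, and the
# greedy rule below are hypothetical stand-ins for pseudo-label-guided feature
# selection at inference time; they do not reproduce the paper's CCM.
import numpy as np

rng = np.random.default_rng(0)

num_nodes, num_parts, dim, num_classes = 5, 4, 16, 3
# Hypothetical per-node feature components (e.g., representations from different layers).
parts = rng.normal(size=(num_nodes, num_parts, dim))
# Hypothetical frozen linear classifier of an already-trained GCN.
W = rng.normal(size=(dim, num_classes))

def correct_node(node_parts, W):
    """Greedily accumulate feature components that raise the pseudo-label score."""
    base = node_parts.sum(axis=0)          # vanilla aggregation of all components
    pseudo = int(np.argmax(base @ W))      # pseudo label from the trained model
    selected = np.zeros(node_parts.shape[1])
    best = -np.inf
    for p in node_parts:                   # greedy pass over candidate components
        cand = selected + p
        score = (cand @ W)[pseudo]
        if score > best:                   # keep the component only if it helps
            selected, best = cand, score
    return pseudo, selected @ W

for v in range(num_nodes):
    pseudo, logits = correct_node(parts[v], W)
    conf = np.exp(logits - logits.max())
    conf /= conf.sum()
    print(f"node {v}: pseudo label {pseudo}, corrected confidence {conf[pseudo]:.3f}")
```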
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2024.110773