LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semisupervised Classification

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-04, Vol. 35 (4), p. 5306-5318
Main Authors: Sellars, Philip; Aviles-Rivero, Angelica I.; Schönlieb, Carola-Bibiane
Format: Article
Language: English
Description
Summary: Semisupervised learning (SSL) has received much recent attention because it alleviates the need for large amounts of labeled data, which can be expensive, require expert knowledge, and be time-consuming to collect. Recent developments in deep semisupervised classification have reached unprecedented performance, and the gap between supervised and semisupervised learning is ever-decreasing. This improvement in performance has been based on the inclusion of numerous technical tricks, strong augmentation techniques, and costly optimization schemes with multiterm loss functions. We propose a new framework, LaplaceNet, for deep semisupervised classification that has a greatly reduced model complexity. We utilize a hybrid approach in which pseudolabels are produced by minimizing the Laplacian energy on a graph; these pseudolabels are then used to iteratively train a neural-network backbone. Our model outperforms state-of-the-art methods for deep semisupervised classification over several benchmark datasets. Furthermore, we consider the application of strong augmentations to neural networks theoretically and justify the use of a multisampling approach for SSL. We demonstrate, through rigorous experimentation, that a multisampling augmentation approach improves generalization and reduces the sensitivity of the network to augmentation.
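To make the graph step concrete, the following is a minimal sketch (not the authors' implementation) of producing pseudolabels by minimizing the Laplacian energy tr(F^T L F) over a fixed affinity graph, using the classic harmonic-function solution. The function name, the choice of the unnormalized Laplacian, and the hard-argmax pseudolabels are illustrative assumptions; the paper's exact energy formulation and solver may differ.

```python
# Minimal sketch of Laplacian-energy pseudolabeling (illustrative, not the
# authors' code). Given a symmetric affinity matrix W over all samples and
# labels for a small labeled subset, minimizing the Laplacian energy
# tr(F^T L F) with F fixed to one-hot labels on the labeled nodes has the
# closed-form harmonic solution F_u = -L_uu^{-1} L_ul Y_l.
import numpy as np

def laplacian_pseudolabels(W, y_labeled, labeled_idx, n_classes):
    """W: (n, n) symmetric affinities; y_labeled: int labels for labeled_idx.

    Assumes every connected component of the graph contains at least one
    labeled node, so the unlabeled-block Laplacian L_uu is invertible.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # unnormalized graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    # One-hot targets for the labeled nodes.
    Y_l = np.zeros((len(labeled_idx), n_classes))
    Y_l[np.arange(len(labeled_idx)), y_labeled] = 1.0

    # Harmonic solution on the unlabeled nodes.
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    F_u = np.linalg.solve(L_uu, -L_ul @ Y_l)

    return unlabeled_idx, np.argmax(F_u, axis=1)   # hard pseudolabels
```

In the full framework this step alternates with training: the backbone's features are used to rebuild the affinity graph, the graph yields fresh pseudolabels, and the pseudolabels retrain the backbone.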
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3203315