Robust AUC optimization under the supervision of clean data
Published in: Scientific Reports, 2024-07, Vol. 14 (1), p. 16693, Article 16693
Authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: AUC (area under the ROC curve) is an essential metric that has been extensively studied in machine learning. Traditional AUC optimization methods require a large-scale clean dataset, while real-world datasets usually contain massive numbers of noisy samples. Many robust AUC optimization methods have been proposed to reduce the impact of noisy samples, but these methods use only the noisy data and ignore the effect of clean data. To make full use of both clean and noisy data, in this paper we propose a new framework for AUC optimization that uses clean samples to guide the processing of the noisy dataset, based on self-paced learning (SPL). Innovatively, we introduce a consistency regularization term to reduce the negative impact of data augmentation on SPL. Traditional SPL methods usually suffer from the high complexity of alternately solving the two critical sub-problems with respect to sample weights and model parameters. To speed up the training process, we propose a new efficient algorithm that alternately updates sample weights and model parameters with the stochastic gradient method. Theoretically, we prove that our new optimization method converges to a stationary point. Comprehensive experiments demonstrate that our robust AUC optimization (RAUCO) algorithm is more robust than existing algorithms.
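The alternating scheme the abstract describes — closed-form self-paced sample weights in one step, stochastic gradient updates of the model in the other — can be illustrated with a minimal toy sketch. This is an assumption-laden illustration, not the paper's RAUCO implementation: it assumes a linear scorer, a squared pairwise surrogate for 1 − AUC, and hard (0/1) SPL weights with a growing age parameter; the names `spl_auc_sgd` and `lam` are hypothetical.

```python
import numpy as np

def pairwise_auc_losses(w, X_pos, X_neg):
    """Squared pairwise surrogate loss for each (positive, negative) pair:
    l_ij = (1 - (w.x_i - w.x_j))^2, a common surrogate for 1 - AUC."""
    margins = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]
    return (1.0 - margins) ** 2

def spl_auc_sgd(X_pos, X_neg, lam=1.5, lr=0.01, epochs=50, rng=None):
    """Toy self-paced AUC optimization (a sketch, not the paper's RAUCO):
    alternately (a) set binary pair weights v_ij = 1[l_ij < lam], keeping
    easy pairs first, and (b) take SGD steps on the weighted pairwise loss."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X_pos.shape[1])
    for _ in range(epochs):
        # (a) closed-form SPL weights: keep pairs whose loss is below lam
        v = (pairwise_auc_losses(w, X_pos, X_neg) < lam).astype(float)
        # (b) stochastic gradient steps over randomly sampled pairs
        for _ in range(200):
            i = rng.integers(len(X_pos))
            j = rng.integers(len(X_neg))
            if v[i, j] == 0.0:
                continue  # pair currently judged too hard (possibly noisy)
            diff = X_pos[i] - X_neg[j]
            margin = w @ diff
            w -= lr * 2.0 * (margin - 1.0) * diff  # grad of (1 - margin)^2
        lam *= 1.1  # age parameter grows: gradually admit harder pairs
    return w

def auc(w, X_pos, X_neg):
    """Empirical AUC: fraction of correctly ranked (pos, neg) pairs."""
    return ((X_pos @ w)[:, None] > (X_neg @ w)[None, :]).mean()
```

On separable synthetic data, the learned linear scorer recovers a high empirical AUC; the clean-data guidance and consistency regularization of the actual paper are omitted here for brevity.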
ISSN: 2045-2322
DOI: 10.1038/s41598-024-66788-2