Partial Label Learning via Gaussian Processes


Full Description

Bibliographic Details
Published in: IEEE transactions on cybernetics 2017-12, Vol.47 (12), p.4443-4450
Main Authors: Zhou, Yu, He, Jianjun, Gu, Hong
Format: Article
Language: English
Description
Abstract: Partial label learning (PL) is a weakly supervised machine learning framework that addresses problems where each training sample is associated with a candidate set of labels, only one of which is its actual label. Since precisely labeled data are usually expensive and difficult to obtain in practice, PL is applicable to many real-world tasks. However, because the ambiguity in the training data inevitably makes such a learning framework difficult to address, only a few algorithms are available so far. In this paper, a new probabilistic kernel algorithm is proposed by employing the Gaussian process model. The main idea is to assume, for each class label, an unobservable latent function with a Gaussian process prior on the feature space. A new likelihood function is then defined to disambiguate the ambiguous labeling information conveyed by the training data. By introducing an aggregate function to approximate the max(·) function involved in the likelihood, the paper not only defines a likelihood equivalent to the max-loss function, which has been proved tighter than other loss functions, but also presents a differentiable convex objective function. Experimental results on six UCI data sets and three real-world PL problems show that the proposed algorithm achieves higher accuracy than state-of-the-art PL algorithms.
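The key smoothing step mentioned in the abstract, replacing the non-differentiable max(·) in the likelihood with a smooth "aggregate function", is commonly realized with a scaled log-sum-exp. The sketch below illustrates this idea only; the function name, the sharpness parameter `gamma`, and the exact form are assumptions for illustration, not the paper's verbatim formulation.

```python
import math

def aggregate_max(values, gamma=10.0):
    """Smooth, convex approximation of max(values) via scaled log-sum-exp:
    (1/gamma) * log(sum(exp(gamma * v))). Larger gamma gives a tighter
    approximation to the true max. The running maximum is subtracted first
    for numerical stability, then added back."""
    m = max(values)
    return m + math.log(sum(math.exp(gamma * (v - m)) for v in values)) / gamma

scores = [0.2, 1.5, 0.9]
print(max(scores))                 # exact max: 1.5
print(aggregate_max(scores, 10.0)) # smooth approximation, slightly above 1.5
```

Because log-sum-exp is differentiable everywhere and upper-bounds the max, substituting it into the likelihood is what makes a gradient-based optimization of a convex objective possible.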
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2016.2611534