How and when to stop the co-training process

Bibliographic Details
Published in: Expert Systems with Applications, 2022-01, Vol. 187, p. 115841, Article 115841
Authors: Grolman, Edita; Cohen, Dvir; Frenklach, Tatiana; Shabtai, Asaf; Puzis, Rami
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:
•Demonstrating the use of the model's outputs for overfitting or noise detection.
•Retrieving a near-optimal co-training model without using a validation set.
•In most cases, co-training results cannot be improved further after a number of iterations.
•Co-training has a greater effect in transfer learning (TL) than in non-TL scenarios.
Co-training is a semi-supervised learning approach used when only a small set of the data available for training is labeled. Using multiple classifiers, the co-training process leverages the small set of labeled data to label an additional set of samples. During this process, the classifiers gradually augment the training data in an iterative process: in each iteration a new co-training model is derived and used to label the unlabeled samples, and a few of the newly labeled samples are added to the training dataset to improve the classifiers' performance. The main challenge in applying co-training is ensuring that the co-trainer assigns accurate labels to the unlabeled samples. Many empirical studies have shown that the performance (accuracy) of the co-trainer cannot be improved further once a certain number of iterations is reached, and in some cases the performance even declines if the process (i.e., the labeling) continues. Despite this, no general solution has been suggested for identifying the optimal final co-training model or the number of iterations before this decline. In this work, we propose a novel method for selecting the near-optimal final co-training model among all models created in the various iterations, according to a predefined measurement based solely on the unlabeled data. Experiments on nine open, publicly available, real-life datasets demonstrate that the proposed method outputs a near-optimal final co-training model compared to the other co-training models created in the various iterations.
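As an illustration of the process described in the abstract, the sketch below implements a basic two-view co-training loop in Python and keeps a snapshot of the classifier pair from every iteration. The function name co_train, the choice of scikit-learn LogisticRegression as the base learner, and the selection criterion (the fraction of unlabeled samples on which the two classifiers agree) are illustrative assumptions; the paper's own predefined measurement on the unlabeled data is not reproduced here.

# Illustrative two-view co-training sketch (not the paper's exact method).
# Assumptions: the data comes in two feature views X1, X2 of the same samples,
# and classifier agreement on the unlabeled pool stands in for the paper's
# predefined unlabeled-data measurement.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


def co_train(X1_lab, X2_lab, y_lab, X1_unl, X2_unl,
             n_iter=30, per_iter=10, base=LogisticRegression(max_iter=1000)):
    X1_l, X2_l, y_l = X1_lab.copy(), X2_lab.copy(), y_lab.copy()
    X1_u, X2_u = X1_unl.copy(), X2_unl.copy()
    history = []  # (clf1, clf2, unlabeled-data score) for every iteration

    for _ in range(n_iter):
        if len(X1_u) == 0:
            break
        clf1 = clone(base).fit(X1_l, y_l)
        clf2 = clone(base).fit(X2_l, y_l)

        p1, p2 = clf1.predict_proba(X1_u), clf2.predict_proba(X2_u)
        pred1 = clf1.classes_[p1.argmax(axis=1)]
        pred2 = clf2.classes_[p2.argmax(axis=1)]

        # Stand-in model-selection score: agreement rate on the unlabeled pool.
        history.append((clf1, clf2, float((pred1 == pred2).mean())))

        # Pseudo-label the unlabeled samples either view is most confident
        # about (a simplification of per-view sample selection).
        conf1, conf2 = p1.max(axis=1), p2.max(axis=1)
        top = np.argsort(np.maximum(conf1, conf2))[-per_iter:]
        new_y = np.where(conf1[top] >= conf2[top], pred1[top], pred2[top])

        X1_l = np.vstack([X1_l, X1_u[top]])
        X2_l = np.vstack([X2_l, X2_u[top]])
        y_l = np.concatenate([y_l, new_y])
        X1_u = np.delete(X1_u, top, axis=0)
        X2_u = np.delete(X2_u, top, axis=0)

    # Return the iteration whose models score best on unlabeled data alone,
    # i.e., a near-optimal stopping point chosen without a validation set.
    return max(history, key=lambda t: t[2])

A caller would pass two NumPy feature matrices for the labeled set, the label vector, and two matrices for the unlabeled pool; the returned tuple is the classifier pair whose unlabeled-data score was highest, mirroring the idea of choosing the final co-training model without a validation set.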
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.115841