Information granule optimization and co-training based on kernel method
Published in: Applied Soft Computing, Vol. 158, June 2024, Article 111584
Format: Article
Language: English
Online access: Full text
Abstract: Co-training was originally designed for multi-view data. Subsequent theoretical research has extended co-training to single-view data. Constructing feature subspaces is one way to expand single-view data into multi-view data, so building suitable feature subspaces is the key to this approach. In this paper, the kernel method is used to form implicit feature subspaces so that multi-view co-training can be performed on single-view data. Because the attribute values in such a feature subspace are not explicitly known, the subspace is a pseudo-view. To adapt the views to the base classifier, this paper uses the neighborhood classifier as the base classifier and, according to its characteristics, proposes an adaptive kernel function and three kernel parameter optimization methods to build feature subspaces suited to the neighborhood classifier. The neighborhood classifier makes its decisions based on information granules generated by unlabeled objects. By iterating over the feature subspace, the information granules are continuously learned and optimized until they take the expected form, which yields the implicit feature space corresponding to the granules and improves the accuracy of the base classifier. Finally, comparative experiments on five UCI data sets, with accuracy and F1-score as evaluation metrics, show that the adaptive kernel function, the three kernel parameter optimization methods, and the co-training method presented in this paper are effective.
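As an illustration of the decision mechanism described above, the sketch below shows a neighborhood classifier whose information granule around an unlabeled object is induced by a kernel similarity. It is a minimal sketch, not the authors' implementation: the Gaussian (RBF) kernel, the parameter gamma, the similarity threshold delta, and the majority-vote rule are assumptions for demonstration, since the abstract does not specify the adaptive kernel or its optimization methods.

```python
import numpy as np

# Illustrative sketch only (not the paper's code): the information granule of an
# unlabeled object x is the set of labeled objects whose kernel similarity to x
# exceeds a threshold; the classifier votes over that granule.

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian kernel; values near 1 mean x and y are close in the implicit feature space."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def neighborhood_granule(x, X_labeled, gamma=1.0, delta=0.5):
    """Indices of labeled objects falling inside the kernel-induced granule of x."""
    sims = np.array([rbf_kernel(x, xi, gamma) for xi in X_labeled])
    return np.where(sims >= delta)[0]

def neighborhood_classify(x, X_labeled, y_labeled, gamma=1.0, delta=0.5):
    """Majority vote over the granule; return None (abstain) when the granule is empty."""
    idx = neighborhood_granule(x, X_labeled, gamma, delta)
    if idx.size == 0:
        return None
    labels, counts = np.unique(y_labeled[idx], return_counts=True)
    return labels[np.argmax(counts)]
```

With this kernel, a larger gamma makes similarities decay faster and so shrinks the granule at a fixed delta; tuning parameters of this kind is broadly the role the abstract assigns to the kernel parameter optimization methods.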
Highlights:
• Three-way decision based on income-cost sensitivity.
• Adaptive kernel function for neighborhood information granule.
• Kernel parameter optimization methods for neighborhood information granule.
• Multi-pseudo-view granular structure co-training based on kernel method.
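The last highlight, multi-pseudo-view co-training, can be pictured with a loop like the one below. This is a hedged sketch rather than the published procedure: the two kernel parameterizations standing in for two pseudo-views, the confidence rule (a non-empty granule), the one-object-per-view transfer, and the fixed round limit are all illustrative assumptions. It reuses neighborhood_classify from the previous sketch.

```python
import numpy as np

def co_train(X_l, y_l, X_u, gammas=(0.5, 2.0), delta=0.5, rounds=10):
    """Toy co-training over kernel-induced pseudo-views (one gamma per pseudo-view).
    Relies on neighborhood_classify defined in the sketch above."""
    X_l, y_l = X_l.copy(), y_l.copy()
    X_u = [np.asarray(x) for x in X_u]
    for _ in range(rounds):
        if not X_u:
            break
        picked = {}                                  # unlabeled index -> predicted label
        for gamma in gammas:                         # each gamma acts as one pseudo-view
            for i, x in enumerate(X_u):
                pred = neighborhood_classify(x, X_l, y_l, gamma=gamma, delta=delta)
                if pred is not None and i not in picked:
                    picked[i] = pred                 # this view is confident enough about x
                    break                            # move at most one object per view per round
        if not picked:
            break                                    # no view could label anything; stop
        for i in sorted(picked, reverse=True):       # pop from the back to keep indices valid
            X_l = np.vstack([X_l, X_u.pop(i)])
            y_l = np.append(y_l, picked[i])
    return X_l, y_l
```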
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2024.111584