Robust Visual Knowledge Transfer via Extreme Learning Machine-Based Domain Adaptation

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2016-10, Vol. 25 (10), p. 4959-4973
Main authors: Lei Zhang; David Zhang
Format: Article
Language: English
Subjects:
Online access: Order full text
Description
Abstract: We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine (ELM)-based cross-domain network learning framework, called ELM-based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by simultaneously minimizing the ℓ2,1-norm of the network output weights and the learning error. The unlabeled target data, as useful knowledge, are also integrated through a fidelity term to guarantee stability during cross-domain learning. This term minimizes the matching error between the learned classifier and a base classifier, so that many existing classifiers can readily be incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferrable. In addition, a manifold regularization term based on the graph Laplacian is incorporated, which makes the framework well suited to semi-supervised learning. We further extend EDA to a multi-view model, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
ISSN: 1057-7149
1941-0042
DOI: 10.1109/TIP.2016.2598679
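
The core building block described in the abstract is a standard extreme learning machine: inputs pass through a fixed random hidden projection, and only the output weights are solved for in closed form. The minimal sketch below shows that base ELM only; the EDA-specific ingredients (the category transformation, ℓ2,1-norm penalty, base-classifier fidelity term, and graph-Laplacian regularizer) are not reproduced, and all function names and parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def elm_train(X, Y, n_hidden=500, reg=1e-2, seed=0):
    """Fit a basic extreme learning machine (ELM).

    X: (n_samples, n_features) inputs; Y: (n_samples, n_classes) one-hot labels.
    The hidden layer is a fixed random projection; only the output weights
    beta are determined analytically via regularized least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Closed-form output weights: (H^T H + reg*I) beta = H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Return class scores; argmax over columns gives the predicted label."""
    return np.tanh(X @ W + b) @ beta
```

According to the abstract, EDA replaces this plain regularized solve with a joint objective that additionally includes the ℓ2,1-norm of the output weights, the fidelity term tying the learned classifier to a base classifier, and the Laplacian manifold regularizer, while retaining an analytic solution for the transferrable output weights.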