Sparse Spatial Coding: A Novel Approach to Visual Recognition
Published in: IEEE Transactions on Image Processing, 2014-06, Vol. 23 (6), pp. 2719-2731
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Successful image-based object recognition techniques have been built on powerful tools such as sparse representation, in place of the popular vector quantization approach. However, one serious drawback of sparse-space-based methods is that local features that are quite similar can be quantized into quite distinct visual words. We address this problem with a novel approach for object recognition, called sparse spatial coding, which efficiently combines a sparse-coding dictionary learning stage with a spatially constrained coding stage. We performed an experimental evaluation using the Caltech 101, Caltech 256, Corel 5000, and Corel 10000 data sets, which were specifically designed for object recognition evaluation. Our results show that our approach achieves accuracy comparable to the best single-feature methods previously published on those databases. On the same data sets, our method outperformed several multiple-feature methods and provided equivalent, and in a few cases slightly less accurate, results than other techniques specifically designed for that purpose. Finally, we report state-of-the-art results for scene recognition on the COsy Localization Database (COLD) and high-performance results on the MIT-67 indoor scene recognition data set, demonstrating that our approach generalizes to such tasks.
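The abstract describes the method only at a high level: a dictionary is learned by sparse coding, and local features are then encoded under a spatial (locality) constraint so that similar descriptors receive similar codes. The sketch below is a rough illustration of that two-stage idea, not the authors' formulation (the paper's objective and parameters are not given in this record); it uses scikit-learn's MiniBatchDictionaryLearning for the dictionary stage and a closed-form LLC-style locality code as a stand-in for the spatially constrained coding stage, with all sizes and parameters (64-D descriptors, 256 atoms, k = 5) chosen arbitrarily for the example.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Illustrative sketch only; this is NOT the objective from the paper.
# Stage 1: learn an overcomplete dictionary from local descriptors by
# ordinary sparse coding (standing in for the dictionary-learning stage).
rng = np.random.default_rng(0)
descriptors = rng.standard_normal((2000, 64))       # e.g. 64-D local features
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                   batch_size=64, random_state=0)
dico.fit(descriptors)
D = dico.components_                                 # (256, 64) dictionary atoms

# Stage 2: locality-constrained coding. Each descriptor is reconstructed
# only from its k nearest atoms (an LLC-style closed-form approximation,
# used here as a stand-in for the spatial-constraint coding stage).
def locality_code(x, D, k=5, eps=1e-6):
    d = np.linalg.norm(D - x, axis=1)                # distance to every atom
    idx = np.argsort(d)[:k]                          # k closest atoms
    B = D[idx]                                       # local base
    C = (B - x) @ (B - x).T                          # local covariance
    C += eps * np.trace(C) * np.eye(k)               # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                     # sum-to-one constraint
    code = np.zeros(D.shape[0])
    code[idx] = w                                    # sparse, local code
    return code

codes = np.array([locality_code(x, D) for x in descriptors[:10]])
print(codes.shape)                                   # (10, 256)
```

In a complete recognition pipeline such codes would typically be pooled over the image and passed to a classifier; those stages are beyond what this abstract specifies.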
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2014.2317988