Spatial-Aware Dictionary Learning for Hyperspectral Image Classification
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2015-01, Vol. 53 (1), pp. 527-541
Main Authors:
Format: Article
Language: English
Online Access: Order full text
Abstract: This paper presents a structured dictionary-based model for hyperspectral data that incorporates both spectral and contextual characteristics of spectral samples. The idea is to partition the pixels of a hyperspectral image into a number of spatial neighborhoods called contextual groups and to model the pixels inside a group as members of a common subspace. That is, each pixel is represented using a linear combination of a few dictionary elements learned from the data, but since pixels inside a contextual group are often made up of the same materials, their linear combinations are constrained to use common elements from the dictionary. To this end, dictionary learning is carried out with a joint sparse regularizer to induce a common sparsity pattern in the sparse coefficients of a contextual group. The sparse coefficients are then used for classification using a linear support vector machine. Experimental results on a number of real hyperspectral images confirm the effectiveness of the proposed representation for hyperspectral image classification. Moreover, experiments with simulated multispectral data show that the proposed model is capable of finding representations that may effectively be used for classification of multispectral resolution samples.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2014.2325067
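
The abstract above describes a joint-sparsity pipeline: pixels are partitioned into contextual groups, each group is coded against a learned dictionary under a shared sparsity pattern, and the resulting coefficients feed a linear SVM. As a rough illustration of the coding-plus-classification stage only, the sketch below substitutes a greedy Simultaneous Orthogonal Matching Pursuit (SOMP) for the paper's convex joint sparse regularizer and uses a fixed random dictionary instead of a learned one; all sizes, names, and the toy data are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def somp(D, Y, k):
    """Greedy joint sparse coding (SOMP) for one contextual group.
    D: (bands, atoms) dictionary with unit-norm columns.
    Y: (bands, n_pixels) pixels of the group.
    k: number of dictionary atoms shared by the whole group."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the residual, summed over
        # the group, so the whole neighborhood votes for a common element.
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = -np.inf           # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Refit the whole group by least squares on the shared support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = coeffs                  # common sparsity pattern
    return X

# Toy usage with synthetic data (sizes are arbitrary assumptions).
rng = np.random.default_rng(0)
bands, atoms, k, group_size = 50, 120, 5, 20
D = rng.standard_normal((bands, atoms))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

groups = [rng.standard_normal((bands, group_size)) for _ in range(30)]
group_labels = rng.integers(0, 3, size=30)  # one toy class per group

codes = np.hstack([somp(D, Y, k) for Y in groups])
features = codes.T                          # coefficient vector per pixel
pixel_labels = np.repeat(group_labels, group_size)
clf = LinearSVC().fit(features, pixel_labels)
```

In the method the abstract describes, the dictionary itself is learned from the data jointly with the codes; replacing the fixed random `D` with an alternating scheme (joint sparse coding followed by least-squares atom updates) would move this sketch closer to that formulation.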