Nonlinear Unmixing via Deep Autoencoder Networks for Generalized Bilinear Model
Published in: Remote Sensing (Basel, Switzerland), 2022-10, Vol. 14 (20), p. 5167
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Hyperspectral unmixing decomposes observed mixed spectra into a collection of constituent pure material signatures and the associated fractional abundances. Because of the universal modeling ability of neural networks, deep learning (DL) techniques are gaining prominence in hyperspectral analysis tasks. The autoencoder (AE) network has been extensively investigated for linear blind source unmixing. However, the linear mixing model (LMM) may fail to provide good unmixing performance when nonlinear mixing effects are non-negligible in complex scenarios. Considering the limitations of the LMM, we propose an unsupervised nonlinear spectral unmixing method based on an autoencoder architecture. First, a deep neural network is employed as the encoder to extract low-dimensional features of the mixed pixel. Then, the generalized bilinear model (GBM) is used to design the decoder, which has a linear mixing part and a nonlinear (bilinear) mixing part. The coefficients of the bilinear mixing part are adjusted by a set of learnable parameters, which allows the method to perform well on both nonlinear and linear data. Finally, regularization terms are imposed on the loss function and an alternating update strategy is used to train the network. Experimental results on synthetic and real datasets verify the effectiveness of the proposed model and show very competitive performance compared with several existing algorithms.
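For reference, the generalized bilinear model (GBM) named in the abstract is commonly written as follows; this is the standard formulation from the unmixing literature, so the paper's exact notation and constraints may differ.

```latex
\mathbf{y} = \sum_{i=1}^{R} a_i \mathbf{m}_i
 + \sum_{i=1}^{R-1} \sum_{j=i+1}^{R} \gamma_{ij}\, a_i a_j \,(\mathbf{m}_i \odot \mathbf{m}_j)
 + \mathbf{n},
\qquad a_i \ge 0,\quad \sum_{i=1}^{R} a_i = 1,\quad \gamma_{ij} \in [0, 1]
```

Here $\mathbf{y}$ is the observed pixel spectrum, $\mathbf{m}_i$ are the $R$ endmember signatures, $a_i$ the fractional abundances, $\gamma_{ij}$ the bilinear interaction coefficients, $\odot$ the elementwise (Hadamard) product, and $\mathbf{n}$ additive noise. Setting every $\gamma_{ij} = 0$ reduces the GBM to the linear mixing model, which is why learning the $\gamma_{ij}$ lets a single decoder cover both linear and nonlinear data.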
ISSN: 2072-4292
DOI: 10.3390/rs14205167
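As a companion to the abstract above, below is a minimal sketch of an autoencoder whose decoder implements the GBM with learnable bilinear coefficients. This is not the authors' implementation: it assumes PyTorch, and the layer sizes, the sigmoid parameterization of the coefficients, the abundance-sparsity regularizer, and the single joint training step are illustrative assumptions (the paper describes several regularization terms and an alternating update strategy, which are not reproduced here).

```python
# Minimal sketch of an autoencoder with a GBM decoder for nonlinear unmixing.
# Illustrative assumptions only: layer sizes, initialization, the sigmoid
# parameterization of gamma, and the single joint training step are not taken
# from the paper.
import torch
import torch.nn as nn


class GBMAutoencoder(nn.Module):
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        R = n_endmembers
        # Encoder: maps a mixed pixel (n_bands,) to R abundance logits.
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, R),
        )
        # Endmember matrix M (n_bands x R), learned as decoder weights.
        self.endmembers = nn.Parameter(torch.rand(n_bands, R))
        # One learnable coefficient gamma_ij per endmember pair (i < j),
        # kept in [0, 1] through a sigmoid in forward().
        n_pairs = R * (R - 1) // 2
        self.gamma_logits = nn.Parameter(torch.zeros(n_pairs))
        self.register_buffer("pair_idx", torch.triu_indices(R, R, offset=1))

    def forward(self, y):
        # Softmax enforces nonnegativity and sum-to-one on the abundances.
        a = torch.softmax(self.encoder(y), dim=-1)                 # (B, R)
        # Linear mixing part: M a.
        linear = a @ self.endmembers.T                             # (B, n_bands)
        # Bilinear part: sum_{i<j} gamma_ij * a_i * a_j * (m_i ⊙ m_j).
        i, j = self.pair_idx
        gamma = torch.sigmoid(self.gamma_logits)                   # (n_pairs,)
        pair_abund = a[:, i] * a[:, j]                             # (B, n_pairs)
        pair_spec = self.endmembers[:, i] * self.endmembers[:, j]  # (n_bands, n_pairs)
        bilinear = (gamma * pair_abund) @ pair_spec.T              # (B, n_bands)
        return linear + bilinear, a


def train_step(model, pixels, optimizer, reg_weight=1e-3):
    """One joint gradient step: reconstruction error plus an L1/2-style
    abundance-sparsity term, used here as a stand-in for the paper's
    regularization terms and alternating update strategy."""
    recon, abund = model(pixels)
    sparsity = torch.sqrt(abund + 1e-8).sum(dim=-1).mean()
    loss = torch.mean((recon - pixels) ** 2) + reg_weight * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A typical (hypothetical) usage would be `model = GBMAutoencoder(n_bands=224, n_endmembers=4)` trained with an Adam optimizer over mini-batches of pixel spectra. Because the sigmoid lets every gamma coefficient approach zero, the same decoder degrades gracefully to the linear mixing model on purely linear data, mirroring the adjustable bilinear coefficients described in the abstract.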