A Residual-Dyad Encoder Discriminator Network for Remote Sensing Image Matching


Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2020-03, Vol. 58 (3), p. 2001-2014
Authors: Khurshid, Numan; Tharani, Mohbat; Taj, Murtaza; Qureshi, Faisal Z.
Format: Article
Language: English
Abstract: We propose a new method for remote sensing image matching. The proposed method uses an encoder subnetwork of an autoencoder pretrained on the GTCrossView data to construct image features. A discriminator network trained on the University of California Merced land-use/land-cover data set (LandUse) and the high-resolution satellite scene data set (SatScene) computes a match score between a pair of computed image features. We also propose a new network unit, called residual-dyad, and empirically demonstrate that networks that use residual-dyad units outperform those that do not. We compare our approach with both traditional and more recent learning-based schemes on the LandUse and SatScene data sets, and the proposed method achieves state-of-the-art results in terms of the mean average precision and average normalized modified retrieval rank (ANMRR) metrics. Specifically, our method achieves an overall improvement in performance of 11.26% and 22.41% for the LandUse and SatScene benchmark data sets, respectively.
ISSN: 0196-2892 (print); 1558-0644 (electronic)
DOI: 10.1109/TGRS.2019.2951820