Exploiting low dimensional features from the MobileNets for remote sensing image retrieval

Bibliographic Details
Published in: Earth Science Informatics 2020-12, Vol. 13 (4), p. 1437-1443
Main Authors: Hou, Dongyang; Miao, Zelang; Xing, Huaqiao; Wu, Hao
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Traditional convolutional neural network (CNN) models generally require long training times and output high-dimensional features for content-based remote sensing image retrieval (CBRSIR). This paper examines the retrieval performance of the MobileNets model and fine-tunes it by changing the dimensions of the final fully connected layer to learn low-dimensional representations for CBRSIR. Experimental results show that the MobileNets model achieves the best retrieval performance in terms of both retrieval accuracy and training speed, with improvements in mean average precision of between 11.2% and 44.39% over the next best model, ResNet152. Moreover, the 32-dimensional features of the fine-tuned MobileNets achieve better retrieval performance than both the original MobileNets and the principal component analysis method, with maximum improvements in mean average precision of 11.56% and 9.8%, respectively. Overall, the MobileNets and the proposed fine-tuning models are simple, yet they substantially improve retrieval performance compared with commonly used CNN models.
ISSN: 1865-0473, 1865-0481
DOI: 10.1007/s12145-020-00484-3
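The abstract reports results in mean average precision (mAP), the standard ranking metric for image retrieval. As a minimal illustration (not the paper's code; the function names and the toy relevance lists are hypothetical), mAP averages, over all query images, the precision computed at each rank where a relevant image appears:

```python
def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance is a list of 1/0 flags,
    best-ranked retrieval result first (hypothetical helper)."""
    hits = 0
    precisions = []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            # Precision at this rank: relevant results seen / results seen.
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0


def mean_average_precision(relevance_lists):
    """mAP: mean of per-query average precisions."""
    return sum(average_precision(r) for r in relevance_lists) / len(relevance_lists)


# Toy example: two queries with hand-made relevance rankings.
# Query 1: relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
# Query 2: relevant item at rank 2        -> AP = 1/2
queries = [[1, 0, 1, 0], [0, 1, 0]]
print(mean_average_precision(queries))
```

Under this metric, a model that ranks relevant images higher for the same queries scores a larger mAP, which is how the paper's 11.2%-44.39% improvements over ResNet152 are expressed.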