Lightweight Salient Object Detection in Optical Remote-Sensing Images via Semantic Matching and Edge Alignment

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023, Vol. 61, pp. 1-11
Authors: Li, Gongyang; Liu, Zhi; Zhang, Xinpeng; Lin, Weisi
Format: Article
Language: English
Abstract: Recently, relying on convolutional neural networks (CNNs), many methods for salient object detection in optical remote-sensing images (ORSI-SOD) have been proposed. However, most of these methods ignore the number of parameters and the computational cost introduced by CNNs, and only a few pay attention to portability and mobility. To facilitate practical applications, in this article, we propose a novel lightweight network for ORSI-SOD based on semantic matching and edge alignment, termed SeaNet. Specifically, SeaNet comprises a lightweight MobileNet-V2 for feature extraction, a dynamic semantic matching module (DSMM) for high-level features, an edge self-alignment module (ESAM) for low-level features, and a portable decoder for inference. First, the high-level features are compressed into semantic kernels. Then, in the DSMM, these semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations. Meanwhile, in the ESAM, cross-scale edge information extracted from two groups of low-level features is self-aligned through an L2 loss and used for detail enhancement. Finally, starting from the highest-level features, the decoder infers salient objects based on the accurate locations and fine details contained in the outputs of the two modules. Extensive experiments on two public datasets demonstrate that our lightweight SeaNet not only outperforms most state-of-the-art lightweight methods, but also yields accuracy comparable to state-of-the-art conventional methods, while having only 2.76 M parameters and running at 1.7 G floating point operations (FLOPs) for 288 × 288 inputs. Our code and results are available at https://github.com/MathLee/SeaNet.
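The abstract describes two core operations: compressing high-level features into per-image "semantic kernels" that are applied via dynamic convolution (DSMM), and aligning cross-scale edge maps with an L2 loss (ESAM). The sketch below is a minimal, illustrative PyTorch reading of these two ideas only; the class names, channel sizes, depthwise formulation, and residual connection are assumptions made for this example, not the authors' implementation (see the official code at https://github.com/MathLee/SeaNet).

```python
# Illustrative sketch only: simplified versions of "semantic kernel + dynamic
# convolution" and "cross-scale edge alignment via L2 loss" as described in
# the abstract. All names and hyperparameters here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicSemanticMatching(nn.Module):
    """Compress a high-level feature map into a per-image semantic kernel and
    apply it as a dynamic (depthwise) convolution to activate salient locations."""

    def __init__(self, channels: int = 96, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Predict one depthwise kernel per channel from globally pooled context.
        self.kernel_gen = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        context = F.adaptive_avg_pool2d(feat, 1).flatten(1)            # (B, C)
        kernels = self.kernel_gen(context).view(
            b * c, 1, self.kernel_size, self.kernel_size)
        # Grouped-conv trick: fold the batch into channels so each image is
        # convolved with its own semantic kernel.
        x = feat.view(1, b * c, h, w)
        out = F.conv2d(x, kernels, padding=self.kernel_size // 2, groups=b * c)
        return out.view(b, c, h, w) + feat                             # residual activation


def edge_alignment_loss(edge_low: torch.Tensor, edge_high: torch.Tensor) -> torch.Tensor:
    """L2 (MSE) loss pulling cross-scale edge maps toward each other,
    in the spirit of the edge self-alignment module."""
    edge_high = F.interpolate(edge_high, size=edge_low.shape[-2:],
                              mode="bilinear", align_corners=False)
    return F.mse_loss(edge_low, edge_high)


if __name__ == "__main__":
    dsmm = DynamicSemanticMatching(channels=96)
    high = torch.randn(2, 96, 18, 18)        # hypothetical high-level features for a 288x288 input
    print(dsmm(high).shape)                   # torch.Size([2, 96, 18, 18])
    e1, e2 = torch.rand(2, 1, 72, 72), torch.rand(2, 1, 36, 36)
    print(edge_alignment_loss(e1, e2))
```

The grouped-convolution trick is one common way to realize per-sample dynamic kernels efficiently in a single batched call; how SeaNet actually parameterizes and applies its semantic kernels is specified in the paper and repository, not here.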
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3235717