ORSI Salient Object Detection via Cross-scale Interaction and Enlarged Receptive Field

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2023-01, Vol. 20, p. 1-1
Authors: Zheng, Jianwei; Quan, Yueqian; Zheng, Hang; Wang, Yibin; Pan, Xiang
Format: Article
Language: English
Description
Abstract: Due to the diversity of scales and shapes, the uncertainty of object positions, and the complexity of edge details, the recently emerging problem of salient object detection in optical remote sensing images (RSI-SOD) is a considerably challenging topic. To cope with these challenges, we propose a new cross-scale interaction network (CIFNet) equipped with an enlarged receptive field, which mainly comprises three modules in an encoder-decoder architecture: a furcate skip connection module (FSCM), a global leading attention module (GLM), and an expansion-integration module (EIM). First, FSCM uses dilated convolutions to enlarge the receptive field and furcate skip connections to capture more multi-scale contextual information, both of which improve the model's adaptability to target objects of different sizes, shapes, and quantities. Second, on the low-resolution branch, GLM locates the potentially significant object positions in the feature map from a global semantic perspective. Finally, through an attention-guided cascade structure, EIM seeks more delicate characteristics by refining the features in a coarse-to-fine fashion. Extensive experiments on two RSI-SOD datasets show that CIFNet achieves superior results, outperforming other state-of-the-art methods. Compared with the second-best method, the performance gain of our method reaches 3.45% on MAE and 1.38% on F_β^adp. Notably, the proposed CIFNet runs with 40.40M parameters, 14.8 GFLOPs of computational complexity, and an inference speed of 58 fps, ensuring high efficiency.
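
To make the dilated-convolution idea behind FSCM more concrete, the following PyTorch sketch shows a generic multi-branch block that enlarges the receptive field with parallel dilated convolutions and fuses the branches through a residual skip connection. It is only an illustrative sketch of the technique named in the abstract, not the authors' FSCM: the class name, dilation rates (1, 2, 4), channel counts, and fusion strategy are assumptions, since the paper's exact design is not given here.

import torch
import torch.nn as nn

class FurcateDilatedBlock(nn.Module):
    # Illustrative multi-branch block: parallel dilated convolutions enlarge the
    # receptive field, and the branch outputs are fused and added back to the input.
    # NOT the authors' FSCM; it only sketches the idea described in the abstract.

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale branches back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual (skip) connection keeps the original features alongside the
        # enlarged-receptive-field features.
        return self.fuse(multi_scale) + x

if __name__ == "__main__":
    feats = torch.randn(1, 64, 56, 56)   # dummy encoder feature map
    block = FurcateDilatedBlock(channels=64)
    print(block(feats).shape)            # torch.Size([1, 64, 56, 56])

In an encoder-decoder architecture such as the one described above, a block of this kind would typically be applied to encoder features before they are passed to the decoder through skip connections, so that each scale carries multi-scale context.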
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2023.3249764