FRCFNet: Feature Reassembly and Context Information Fusion Network for Road Extraction
Published in: IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, p. 1-5
Main authors: , , , , ,
Format: Article
Language: English
Abstract: Existing road extraction methods based on very high resolution (VHR) satellite imagery suffer from insufficient multidimensional feature expression and difficulty capturing global context. We propose a grouping multidimensional feature reassembly (GMFR) module, performing channel, height, and width reassembly of multiscale features between network layers via gating to focus on valid information. Given the distinct geometric structure of roads, we propose a novel module, multidirectional context information fusion (MCIF), utilizing four strip convolutions to capture the long-distance context in various directions within VHR images. It aggregates global information through two pooling branches. Based on these, we designed a road extraction network, FRCFNet, with an encoder-decoder structure and skip connections. The proposed network efficiently fuses multiscale features while capturing global context from various directions and reducing complexity. Experimental results show that the proposed method achieves 68.97% and 80.23% F1-score on the CHN6-CUG and DeepGlobe datasets, respectively, outperforming other comparison methods. The code will be posted at https://github.com/CHD-IPAC/FRCFNet.
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2024.3401728
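The abstract describes the MCIF module as combining four strip convolutions (to capture long-range context along different directions) with pooling for global aggregation. The paper's implementation is not reproduced here; the following is a minimal NumPy sketch of that general idea, with uniform (unlearned) strip kernels and a single global-average-pooling branch standing in for the actual learned design. All function names, the kernel choice, and the fusion by simple averaging are assumptions for illustration only.

```python
import numpy as np

def strip_conv(x, kernel, direction):
    """Convolve a 2-D feature map x with a 1-D strip kernel along one direction.

    direction: 'h' (horizontal, 1xk), 'v' (vertical, kx1),
               'd' (main diagonal), 'a' (anti-diagonal).
    Zero padding keeps the output the same size as the input.
    """
    k = len(kernel)
    pad = k // 2
    H, W = x.shape
    xp = np.pad(x, pad)                      # zero-pad all four sides
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad        # center in padded coordinates
            for t, w in enumerate(kernel):
                o = t - pad                  # signed offset along the strip
                if direction == 'h':
                    out[i, j] += w * xp[ci, cj + o]
                elif direction == 'v':
                    out[i, j] += w * xp[ci + o, cj]
                elif direction == 'd':
                    out[i, j] += w * xp[ci + o, cj + o]
                else:                        # 'a'
                    out[i, j] += w * xp[ci + o, cj - o]
    return out

def mcif_sketch(x, k=5):
    """Toy fusion of four directional strip responses plus a global-pooling branch."""
    kernel = np.ones(k) / k                  # uniform kernel stands in for learned weights
    strips = [strip_conv(x, kernel, d) for d in ('h', 'v', 'd', 'a')]
    directional = np.mean(strips, axis=0)    # fuse the four directions
    gap = x.mean()                           # global average pooling branch
    return directional + gap                 # broadcast global context over the map
```

Long, thin structures such as roads respond strongly to the strip whose orientation matches them, which is the intuition behind using several directions rather than one square kernel.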