Refined Extraction Of Building Outlines From High-Resolution Remote Sensing Imagery Based on a Multifeature Convolutional Neural Network and Morphological Filtering

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, Vol. 13, pp. 1842-1855
Main authors: Xie, Yakun; Zhu, Jun; Cao, Yungang; Feng, Dejun; Hu, Minjun; Li, Weilian; Zhang, Yunhao; Fu, Lin
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: The automatic extraction of building outlines from high-resolution images is an important and challenging task. Convolutional neural networks have shown excellent results compared with traditional building extraction methods because of their ability to extract high-level abstract features from images. However, current building extraction methods have difficulty making full use of multiple image features, and consequently the resulting building boundaries are irregular. To overcome these limitations, we propose a method for extracting buildings from high-resolution images using a multifeature convolutional neural network (MFCNN) and morphological filtering. Our method consists of two steps. First, the MFCNN, which consists of a residual connected unit, a dilated perception unit, and a pyramid aggregation unit, is used to achieve pixel-level segmentation of the buildings. Second, morphological filtering is used to optimize the building boundaries, improve boundary regularity, and obtain refined building boundaries. The Massachusetts and Inria datasets are selected for the experimental analysis. Under the same experimental conditions, the extraction results of the proposed MFCNN are compared with those of deep learning models commonly used in recent years: FCN-8s, SegNet, and U-Net. On both datasets, the proposed model improves the F1-score by 3.31%-5.99%, the overall accuracy (OA) by 1.85%-3.07%, and the intersection over union (IOU) by 3.47%-8.82%. These findings demonstrate that the proposed method is effective at extracting buildings from complex scenes.
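The abstract only summarizes the two-step pipeline, and the authors' implementation is not reproduced here. As a minimal sketch, assuming OpenCV and NumPy conventions, the code below illustrates the kind of morphological post-processing the second step describes (binarizing a pixel-wise building probability map, then regularizing boundaries with opening and closing) and how the reported pixel-level metrics (OA, F1-score, IOU) are conventionally computed. The function names, threshold, and kernel size are hypothetical and not taken from the paper.

```python
# Hypothetical post-processing and evaluation sketch (not the authors' code).
import numpy as np
import cv2


def refine_building_mask(prob_map, threshold=0.5, kernel_size=5):
    """Binarize a building probability map and regularize its boundaries.

    prob_map: 2-D float array in [0, 1] produced by a segmentation network.
    Returns a uint8 mask where 1 marks building pixels.
    """
    mask = (prob_map >= threshold).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    # Opening removes small spurious blobs; closing fills small holes and
    # smooths jagged building edges.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask


def pixel_metrics(pred, gt):
    """Overall accuracy, F1-score, and IoU for binary masks (1 = building)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    oa = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"OA": oa, "F1": f1, "IoU": iou}
```

In this sketch, the structuring-element size controls the trade-off between boundary smoothing and the loss of small buildings; the paper's actual filtering parameters are not specified in the abstract.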
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2020.2991391