Multi-Component Fusion Network for Small Object Detection in Remote Sensing Images
Saved in:
Published in: | IEEE Access 2019, Vol.7, p.128339-128352 |
---|---|
Main authors: | , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Summary: | Small object detection is a major challenge in the field of object detection. With the development of deep learning, many methods based on deep convolutional neural networks (DCNNs) have greatly improved detection speed while maintaining accuracy. However, due to the trade-off between spatial detail and semantic information in DCNNs, previous deep learning methods often struggle when detecting small objects. The challenge is more serious in complex scenes involving similar background objects and/or occlusion, such as in remote sensing imagery. In this paper, we propose an end-to-end DCNN called the multi-component fusion network (MCFN) to improve the accuracy of small object detection in such cases. First, we propose a dual pyramid fusion network, which densely concatenates spatial information and semantic information to extract small object features via encoding and decoding operations. Then we use a relative region proposal network to adequately extract the features of small object samples and parts of objects. Finally, to achieve robustness against background disturbance, we add contextual information to the proposal regions before final detection. Experimental evaluations demonstrate that the proposed method significantly improves the accuracy of object detection in remote sensing images compared with other state-of-the-art methods, especially in complex scenes with occlusion. |
---|---|
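The core fusion step the abstract describes, concatenating high-resolution spatial features with upsampled low-resolution semantic features, can be sketched in a minimal form. This is an illustrative assumption, not the paper's actual implementation: the function names, nearest-neighbor upsampling choice, and toy tensor shapes are all hypothetical, standing in for the dense concatenation performed inside the dual pyramid fusion network.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map (hypothetical helper)."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_spatial_semantic(spatial, semantic):
    """Channel-wise concatenation of a high-resolution (spatial) map with an
    upsampled low-resolution (semantic) map -- the kind of encode/decode
    fusion step the abstract attributes to the dual pyramid fusion network."""
    factor = spatial.shape[1] // semantic.shape[1]
    up = upsample_nearest(semantic, factor)
    return np.concatenate([spatial, up], axis=0)

# Toy example: a 64-channel 32x32 spatial map fused with a
# 128-channel 8x8 semantic map yields a 192-channel 32x32 map.
spatial = np.random.rand(64, 32, 32)
semantic = np.random.rand(128, 8, 8)
fused = fuse_spatial_semantic(spatial, semantic)
print(fused.shape)  # (192, 32, 32)
```

The concatenated map keeps the fine localization cues of the shallow branch alongside the semantic cues of the deep branch, which is why such fusion helps small-object detectors.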
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2019.2939488 |