Arbitrarily Oriented Object Detection in Remote Sensing Images Based on Improved YOLOv4-CSP
| Published in: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, Vol. 15, pp. 1-15 |
|---|---|
| Main authors: | , , , , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Arbitrarily oriented object detection in remote sensing images is a challenging task. At present, most algorithms are dedicated to improving detection accuracy while ignoring detection speed. To further improve detection accuracy and provide a more efficient model for scenes that require real-time detection, we propose an improved YOLOv4-CSP network for rotated object detection in remote sensing images. Our approach makes three main contributions. First, we design a new bounding box regression loss function, the distance and angle intersection over union (DAIoU). This loss is formed by adding a distance penalty term and an angle penalty term to the intersection over union, and it is suitable for arbitrarily oriented object detection networks. Second, we develop an adaptive angle setting method for anchors based on the k-means clustering algorithm. This method obtains representative angles that better describe the distribution of the angle set; assigning these representative angles to all anchors for training reduces the effort the network needs to adjust anchors to ground-truth (GT) bounding boxes. Finally, we improve the YOLOv4-CSP network and make it suitable for detection scenarios based on rotated anchors by applying rotation transformations. We combine the above methods and use the final network to perform the detection task. Experimental results on three remote sensing datasets, i.e., HRSC2016, UCAS-AOD, and SSDD+, validate the effectiveness of our method. Comparisons with state-of-the-art methods demonstrate that our method significantly improves detection accuracy while achieving higher detection speed. |
| ISSN: | 1939-1404, 2151-1535 |
| DOI: | 10.1109/JSTARS.2022.3214541 |
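
The abstract describes two mechanisms concrete enough to illustrate in code. The first is the DAIoU regression loss, built from an IoU term plus a distance penalty and an angle penalty. The record does not give the exact penalty formulas, so the sketch below assumes a DIoU-style normalized center-distance penalty and a normalized angle-difference penalty; the rotated IoU value is assumed to be computed elsewhere in the detector. This is a sketch of the general idea, not the paper's implementation.

```python
import math

def daiou_loss(iou, pred, target):
    """Sketch of a DAIoU-style loss for rotated boxes.

    pred / target: (cx, cy, w, h, theta) with theta in radians.
    iou: (rotated) IoU between the two boxes, computed elsewhere
         (e.g., by the detector's polygon-IoU routine).
    """
    px, py, pw, ph, pt = pred
    tx, ty, tw, th, tt = target

    # Distance penalty (DIoU-style assumption): squared center distance
    # normalized by the diagonal of an axis-aligned box enclosing both
    # rotated boxes, approximated from each box's circumscribed radius.
    pr = 0.5 * math.hypot(pw, ph)
    tr = 0.5 * math.hypot(tw, th)
    enc_w = max(px + pr, tx + tr) - min(px - pr, tx - tr)
    enc_h = max(py + pr, ty + tr) - min(py - pr, ty - tr)
    center_dist2 = (px - tx) ** 2 + (py - ty) ** 2
    dist_penalty = center_dist2 / (enc_w ** 2 + enc_h ** 2 + 1e-9)

    # Angle penalty (assumed form): squared angle difference normalized
    # to [0, 1], treating angles as pi-periodic.
    dtheta = abs(pt - tt) % math.pi
    dtheta = min(dtheta, math.pi - dtheta)
    angle_penalty = (2.0 * dtheta / math.pi) ** 2

    return 1.0 - iou + dist_penalty + angle_penalty
```

The second mechanism is the adaptive angle setting for anchors via k-means clustering of the ground-truth box angles. The sketch below runs a plain 1-D k-means over angles in degrees; the record does not specify how the paper handles angle periodicity or initialization, so those choices are assumptions.

```python
import numpy as np

def kmeans_anchor_angles(gt_angles_deg, k=3, iters=100, seed=0):
    """Sketch: cluster GT box angles to obtain k representative anchor angles.

    gt_angles_deg: 1-D array of ground-truth box angles in degrees,
                   assumed to lie in [0, 180).
    """
    rng = np.random.default_rng(seed)
    angles = np.asarray(gt_angles_deg, dtype=float).ravel()
    centers = rng.choice(angles, size=k, replace=False)

    for _ in range(iters):
        # Assign each angle to its nearest cluster center.
        dist = np.abs(angles[:, None] - centers[None, :])
        labels = dist.argmin(axis=1)
        # Recompute each center as the mean of its assigned angles.
        new_centers = np.array([
            angles[labels == j].mean() if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return np.sort(centers)

# Example: derive three anchor angles from a dataset's GT box angles.
# angles = np.array([5.0, 12.0, 88.0, 92.0, 170.0, 175.0])
# print(kmeans_anchor_angles(angles, k=3))
```

The resulting cluster centers would be assigned as the fixed angles of the rotated anchors during training, so the network only needs to regress small angular offsets rather than arbitrary rotations.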