Global-Margin Uncertainty and Collaborative Sampling for Active Learning in Complex Aerial Images Object Detection


Detailed Description

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, pp. 1-5
Main Authors: Zhu, Dongjun; Gu, Chengjie; Zhang, Junjun; Yao, Yuyou; Tan, Dayu
Format: Article
Language: English
Description
Summary: Object detection in aerial images based on deep learning requires large amounts of labeled data, whereas manual annotation of aerial images is time-consuming and laborious. As a branch of machine learning, active learning helps humans find valuable samples by designing corresponding query strategies, which effectively reduces the cost of manual labeling. However, objects in aerial images are usually small and dense and are accompanied by interference from complex backgrounds, which poses considerable challenges for active learning in selecting high-value aerial image samples. Moreover, active learning for aerial image object detection remains relatively understudied. This letter therefore proposes a novel active learning method that uses global-margin uncertainty (GMU) and collaborative sampling (CS) to identify highly valuable aerial image samples, reducing annotation cost and improving the training efficiency of models. In GMU, the predicted category scores are used to compute the global uncertainty and margin uncertainty of unlabeled aerial images, and the images with high uncertainty scores are selected as candidate samples. In CS, a main model and an auxiliary model are trained separately to detect the candidate samples, and the samples on which the two models' detection results differ most are selected for manual annotation. Experiments on the VisDrone2019 and DOTA-v1.5 datasets show that the proposed method outperforms several state-of-the-art active learning methods.
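The margin-uncertainty idea behind GMU can be illustrated with a minimal sketch. This is not the authors' exact formulation (the letter also combines a global-uncertainty term, which is omitted here); the function names and the mean aggregation over detections are assumptions for illustration only. The core notion is standard: a small gap between the two highest predicted category scores signals an uncertain detection, and images whose detections are most uncertain become candidate samples.

```python
import numpy as np

def margin_uncertainty(class_scores):
    """Margin uncertainty for one detection: a small gap between the
    top-two predicted category scores means the model is unsure, so
    the uncertainty is high. `class_scores` is a 1-D array of scores."""
    top2 = np.sort(np.asarray(class_scores))[-2:]  # two highest scores
    return 1.0 - (top2[1] - top2[0])               # in [0, 1] for scores in [0, 1]

def rank_images(image_detections, top_k=2):
    """Rank unlabeled images by the mean margin uncertainty of their
    detections and return the indices of the `top_k` most uncertain
    images as candidate samples. (Aggregation by mean is an assumption.)"""
    scores = [np.mean([margin_uncertainty(d) for d in dets]) if dets else 0.0
              for dets in image_detections]
    return list(np.argsort(scores)[::-1][:top_k])

# Toy example: image 1 has a near-tied top-two prediction (most uncertain),
# image 0 is confident, image 2 is in between.
images = [
    [[0.90, 0.05, 0.05]],  # confident detection -> low uncertainty
    [[0.45, 0.40, 0.15]],  # near-tied top-two  -> high uncertainty
    [[0.50, 0.30, 0.20]],  # moderate gap       -> medium uncertainty
]
print(rank_images(images, top_k=2))  # -> [1, 2]
```

In the full method, images selected this way are then passed to the CS stage, where disagreement between the main and auxiliary detectors decides which candidates are finally sent for manual annotation.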
ISSN: 1545-598X, 1558-0571
DOI:10.1109/LGRS.2024.3373038