Higher efficient YOLOv7: a one-stage method for non-salient object detection


Detailed description

Saved in:
Bibliographic details
Published in: Multimedia tools and applications 2024-04, Vol. 83 (14), p. 42257-42283
Main authors: Dong, Chengang, Tang, Yuhao, Zhang, Liyan
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Compared to the remarkable progress within the discipline of object detection in recent years, real-time detection of non-salient objects remains a challenging research task. Most existing detection methods fail to adequately extract the global features of targets, leading to suboptimal performance when dealing with non-salient objects. In this paper, we propose a unified framework called Higher efficient (He)-YOLOv7 to enhance the detection capability of YOLOv7 for non-salient objects. First, we introduce a refined Squeeze-and-Excitation Network (SENet) to dynamically adjust the weights of feature channels, thereby enhancing the model's perception of non-salient objects. Second, we design an Angle Intersection over Union (AIoU) loss function that considers relative positional information, optimizing the widely used Complete Intersection over Union (CIoU) loss function in YOLOv7; this significantly accelerates the model's convergence. Moreover, He-YOLOv7 adopts a blended data augmentation strategy to simulate occlusion among objects, further improving the model's ability to filter out noise and enhancing its robustness. Experimental results demonstrate a significant improvement of 2.4% mean Average Precision (mAP) on the Microsoft Common Objects in Context (MS COCO) dataset and a notable enhancement of 1.2% mAP on the PASCAL VOC dataset. At the same time, our approach demonstrates performance comparable to state-of-the-art real-time object detection methods.
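The abstract's AIoU loss is presented as a refinement of the standard CIoU loss used in YOLOv7. The paper's own AIoU formulation is not reproduced here; as background, the following is a minimal sketch of the standard CIoU loss the abstract refers to, which combines overlap (IoU), a normalized center-distance penalty, and an aspect-ratio consistency term. Boxes are assumed to be in (x1, y1, x2, y2) format.

```python
import math

def ciou_loss(box_pred, box_gt, eps=1e-9):
    """Standard Complete-IoU loss for two axis-aligned boxes (x1, y1, x2, y2).

    Sketch for illustration only; not the paper's AIoU variant.
    """
    x1, y1, x2, y2 = box_pred
    gx1, gy1, gx2, gy2 = box_gt
    w1, h1 = x2 - x1, y2 - y1
    w2, h2 = gx2 - gx1, gy2 - gy1

    # Overlap term: intersection over union
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Distance term: squared center distance over squared diagonal
    # of the smallest enclosing box
    rho2 = ((x1 + x2 - gx1 - gx2) ** 2 + (y1 + y2 - gy1 - gy2) ** 2) / 4.0
    cw = max(x2, gx2) - min(x1, gx1)
    ch = max(y2, gy2) - min(y1, gy1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term
    v = (4.0 / math.pi ** 2) * (
        math.atan(w2 / (h2 + eps)) - math.atan(w1 / (h1 + eps))
    ) ** 2
    alpha = v / (1.0 - iou + v + eps)

    return 1.0 - (iou - rho2 / c2 - alpha * v)
```

Unlike plain IoU loss, the distance and aspect terms keep the gradient informative even when boxes do not overlap, which is the property the paper's AIoU modification builds on.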
ISSN: 1380-7501
eISSN: 1573-7721
DOI:10.1007/s11042-023-17185-w