VV-YOLO: A Vehicle View Object Detection Model Based on Improved YOLOv4


Detailed Description

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2023-03, Vol. 23 (7), p. 3385
Main Authors: Wang, Yinan, Guan, Yingzhou, Liu, Hanxu, Jin, Lisheng, Li, Xinwei, Guo, Baicang, Zhang, Zhe
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Vehicle view object detection technology is key to the environment perception modules of autonomous vehicles and is crucial for driving safety. To address the characteristics of complex scenes, such as dim light, occlusion, and long distance, an improved YOLOv4-based vehicle view object detection model, VV-YOLO, is proposed in this paper. The VV-YOLO model adopts an anchor-based implementation. For anchor clustering, an improved K-means++ algorithm is used to reduce the instability in clustering results caused by the random selection of initial cluster centers, so that the model obtains reasonable initial anchors. Firstly, the CA-PAN network was designed by adding a coordinate attention mechanism to the neck network of the VV-YOLO model; this realizes multidimensional modeling of the relationships between image feature channels and improves the extraction of complex image features. Secondly, to ensure sufficient model training, the loss function of the VV-YOLO model was reconstructed based on the focal loss, which alleviates the training imbalance caused by the unbalanced distribution of the training data. Finally, the KITTI dataset was selected as the test set for quantitative experiments. The results show that the precision and average precision of the VV-YOLO model are 90.68% and 80.01%, respectively, which are 6.88% and 3.44% higher than those of the YOLOv4 model, while the model's computation time on the same hardware platform does not increase significantly. In addition to the KITTI dataset, the BDD100K dataset and typical complex traffic scene data collected in the field were used for a visual comparison of results, verifying the validity and robustness of the VV-YOLO model.
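The two algorithmic ingredients named in the abstract can be illustrated in code. The sketch below is not the paper's implementation: the function names, the 1 − IoU clustering distance (standard practice for YOLO anchor clustering), and the focal-loss hyperparameters are illustrative assumptions, and the abstract's "focus function" is read here as the focal loss.

```python
import numpy as np

def iou_wh(box, boxes):
    """IoU between one (w, h) box and an array of (w, h) boxes,
    assuming all boxes share the same top-left corner."""
    inter = np.minimum(box[0], boxes[:, 0]) * np.minimum(box[1], boxes[:, 1])
    union = box[0] * box[1] + boxes[:, 0] * boxes[:, 1] - inter
    return inter / union

def kmeanspp_anchors(wh, k, seed=0, iters=100):
    """Cluster ground-truth (w, h) boxes into k anchor shapes.

    K-means++ seeding: each new center is drawn with probability
    proportional to its 1 - IoU distance from the nearest already
    chosen center, avoiding the instability of fully random seeds.
    """
    rng = np.random.default_rng(seed)
    centers = [wh[rng.integers(len(wh))]]            # first center: uniform
    for _ in range(1, k):
        d = np.min([1 - iou_wh(c, wh) for c in centers], axis=0)
        centers.append(wh[rng.choice(len(wh), p=d / d.sum())])
    centers = np.array(centers)

    # Standard Lloyd iterations under the same 1 - IoU distance.
    for _ in range(iters):
        dists = np.stack([1 - iou_wh(c, wh) for c in centers])
        assign = np.argmin(dists, axis=0)
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by area

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy, well-classified examples so
    training is not dominated by the abundant easy/background samples."""
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -(alpha_t * (1 - p_t) ** gamma * np.log(p_t)).mean()
```

For example, `kmeanspp_anchors(wh, 9)` on a dataset's box dimensions would yield nine area-sorted anchor shapes analogous to YOLOv4's defaults, and `focal_loss` penalizes a confident mistake (p = 0.1 for a positive) far more than an easy hit (p = 0.9).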
ISSN: 1424-8220
DOI: 10.3390/s23073385