Nighttime vehicle detection algorithm based on image translation technology
Published in: Journal of Intelligent & Fuzzy Systems, 2024-02, Vol. 46 (2), pp. 5377-5389
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: To address the drop in accuracy that vehicle object detection models suffer under low-light nighttime conditions, this paper proposes a method that improves detection accuracy and precision by enhancing the training set with image translation technology based on Generative Adversarial Networks (GANs), specifically CycleGAN. An existing, well-established daytime vehicle dataset is translated into a nighttime vehicle dataset. The proposed method uses a comparative experimental approach: translation models with different degrees of fitting are obtained by varying the training set size, and the optimal model is selected based on an evaluation of the translation quality. The translated dataset is then used to train a YOLOv5-based object detection model, and the quality of the nighttime dataset is assessed through the confidence and effectiveness of its annotations. The results show that training the object detection model on the translated nighttime vehicle dataset increases the area under the PR curve and the peak F1 score by 10.4% and 9%, respectively. This approach improves the annotation accuracy and precision of vehicle object detection models in nighttime environments without requiring additional labeling of vehicles in monitoring videos.
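The two detection metrics reported in the abstract, area under the precision-recall curve and peak F1 score, can be computed directly from precision/recall pairs swept over confidence thresholds. The sketch below is illustrative only: the precision and recall values are invented for demonstration and are not results from the paper.

```python
# Hedged sketch of the two metrics named in the abstract: area under the
# PR curve (trapezoidal rule) and peak F1. Data values are illustrative.

def pr_auc(recalls, precisions):
    """Trapezoidal area under the precision-recall curve."""
    area = 0.0
    for i in range(1, len(recalls)):
        area += (recalls[i] - recalls[i - 1]) * \
                (precisions[i] + precisions[i - 1]) / 2.0
    return area

def peak_f1(recalls, precisions):
    """Maximum F1 = 2PR / (P + R) over all operating points."""
    return max(
        2 * p * r / (p + r) if (p + r) > 0 else 0.0
        for p, r in zip(precisions, recalls)
    )

# Illustrative curve: recall rises as the confidence threshold is lowered,
# while precision falls (typical detector behavior).
recalls = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
precisions = [1.0, 0.95, 0.9, 0.8, 0.6, 0.4]

print(f"AUC-PR:  {pr_auc(recalls, precisions):.3f}")
print(f"peak F1: {peak_f1(recalls, precisions):.3f}")
```

The paper's reported gains (+10.4% AUC-PR, +9% peak F1) would correspond to comparing these two numbers between a detector trained on the original daytime dataset and one trained on the CycleGAN-translated nighttime dataset.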
ISSN: 1064-1246, 1875-8967
DOI: 10.3233/JIFS-233899