Synthetic Dataset Generation Using Photo-Realistic Simulation with Varied Time and Weather Axes

Bibliographic Details
Published in: Electronics (Basel) 2024-04, Vol. 13 (8), p. 1516
Authors: Lee, Thomas; Mckeever, Susan; Courtney, Jane
Format: Article
Language: English
Abstract: To facilitate the integration of autonomous unmanned air vehicles (UAVs) into day-to-day life, it is imperative that safe navigation can be demonstrated in all relevant scenarios. For UAVs whose navigation is driven by artificial neural networks, training and testing data from multiple environmental contexts are needed to ensure that bias is minimised. A common weak point of trained networks is the loss of predictive capacity on unfamiliar data, which worsens the further the input deviates from the training data. However, training for multiple environmental variables dramatically increases the man-hours required for data collection and validation. In this work, a potential solution to this data availability issue is presented through the generation and evaluation of photo-realistic image datasets rendered from a simulation of 3D-scanned physical spaces, theoretically linked to those spaces in a digital twin (DT) configuration. The simulation is then used to generate environmentally varied iterations of the target object in that physical space along two contextual variables (weather and daylight). This yields an expanded dataset of bicycle images containing weather- and time-varied versions of the same scenes, which are then evaluated using a generic build of the YoloV3 object detection network; the response is compared against two real-image (night and day) datasets as baselines. The results show that the network response remained consistent across the temporal axis, with a measured domain shift of approximately 23% between the two real-image baselines.
ISSN: 2079-9292
DOI: 10.3390/electronics13081516
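
As a rough illustration of the evaluation step described in the abstract, the sketch below scores folders of images with an off-the-shelf YOLOv3 model through OpenCV's DNN module and compares mean bicycle-detection confidence between a real day baseline and a real night baseline. This is not the authors' pipeline: the file names, folder layout, COCO class index, and the use of a relative confidence drop as a proxy for domain shift are assumptions made for illustration, and the paper's exact domain-shift metric may differ.

```python
# Minimal sketch (assumptions, not the authors' code): score weather/time-varied
# image sets with a COCO-trained YOLOv3 via OpenCV DNN and compare the mean
# bicycle-detection confidence of two real-image baselines (day vs. night).
import glob
import cv2
import numpy as np

BICYCLE_CLASS_ID = 1  # COCO index for "bicycle" (assumes COCO-trained weights)

# Standard Darknet YOLOv3 files; the file names here are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()

def bicycle_confidence(image_path: str) -> float:
    """Return the highest bicycle-class confidence YOLOv3 reports for one image."""
    img = cv2.imread(image_path)
    if img is None:
        return 0.0
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    best = 0.0
    for output in net.forward(out_names):
        for det in output:              # det = [cx, cy, w, h, objectness, class scores...]
            conf = float(det[4] * det[5:][BICYCLE_CLASS_ID])
            best = max(best, conf)
    return best

def mean_confidence(folder: str) -> float:
    """Average the per-image bicycle confidence over all images in a folder."""
    paths = glob.glob(f"{folder}/*.png") + glob.glob(f"{folder}/*.jpg")
    return float(np.mean([bicycle_confidence(p) for p in paths])) if paths else 0.0

# Hypothetical folder layout: simulated weather/time variants plus real baselines.
for variant in ["sim_clear_noon", "sim_rain_noon", "sim_clear_night", "sim_fog_dusk"]:
    print(f"{variant}: mean confidence = {mean_confidence(variant):.3f}")

day, night = mean_confidence("real_day"), mean_confidence("real_night")
# Crude proxy for the day/night domain shift: relative drop in mean confidence.
if day > 0:
    print(f"day -> night confidence drop: {100 * (day - night) / day:.1f}%")
```

Run against each simulated variant in turn, a scoring loop of this kind would show whether the detector's response stays stable across the time-of-day axis while the two real baselines exhibit the larger day/night gap reported in the abstract.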