MOTION TARGET MONITORING AND RECOGNITION IN VIDEO SURVEILLANCE USING CLOUD–EDGE–IOT AND MACHINE LEARNING TECHNIQUES
Saved in:
Published in: Fractals (Singapore) 2024, Vol. 32 (9n10)
Main authors: , , , , , , ,
Format: Article
Language: English
Online access: Order full text
Summary: We know that autonomous vehicles handle camera and LiDAR data pipelines and use the sensor images to provide an autonomous object identification solution. While current research yields reasonable results, it falls short of offering practical solutions. For example, lane markings and traffic signs may become obscured by accumulation on roads, making it unsafe for a self-driving car to navigate. Moreover, the car's sensors may be severely hindered by intense rain, snow, fog, or dust storms, which could endanger human safety. This research therefore introduces Multi-Sensor Fusion and Segmentation for Deep Q-Network (DQN)-based Multi-Object Tracking in Autonomous Vehicles. An Improved Adaptive Extended Kalman Filter (IAEKF) is used for noise reduction, Normalized Gamma Transformation-based CLAHE (NGT-CLAHE) for contrast enhancement, and an Improved Adaptive Weighted Mean Filter (IAWMF) for adaptive thresholding. A novel multi-segmentation scheme combines several segmentation methods, at degrees dependent on the orientation of the images. DenseNet (D Net)-based multi-image fusion provides faster processing speeds and increased efficiency. Grid map-based pathways and lanes are chosen using the Energy Valley Optimizer (EVO) technique, which achieves flexibility, robustness, and scalability by simplifying complex activities. Furthermore, the YOLOv7 model is used for classification and detection. Metrics such as velocity, accuracy rate, success rate, success ratio, and mean-squared error are used to assess the proposed method.
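The tracking pipeline summarized above builds on Kalman-style state estimation. The abstract does not give the IAEKF's equations, so the following is only a minimal sketch of the standard linear Kalman filter it improves upon, applied to constant-velocity 2D target tracking; all matrices and noise values here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal linear Kalman filter for constant-velocity 2D tracking.
# The paper's IAEKF adds adaptive noise estimation on top of the
# (extended) Kalman filter; this shows only the base predict/update
# cycle with assumed noise covariances.

dt = 1.0
F = np.array([[1, 0, dt, 0],    # state transition over [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # measurement model: position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (assumed)
R = 0.10 * np.eye(2)            # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z is a 2D position measurement."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a target moving roughly diagonally through noisy detections.
x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 1.0]), np.array([2.0, 2.1]), np.array([3.1, 2.9])]:
    x, P = kf_step(x, P, z)
```

After a few updates, the state `x` holds the filtered position and an implied velocity estimate; an adaptive variant would additionally re-estimate `Q` and `R` online from the innovation sequence.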
ISSN: 0218-348X, 1793-6543
DOI: 10.1142/S0218348X25400134