An Improved Lane-Keeping Controller for Autonomous Vehicles Leveraging an Integrated CNN-LSTM Approach
Published in: International Journal of Advanced Computer Science & Applications, 2023, Vol. 14 (7)
Format: Article
Language: English
Online access: Full text
Abstract: Representing the task of navigating a car through traffic using traditional algorithms is a complex endeavor that presents significant challenges. To overcome this, researchers have started training artificial neural networks on data from front-facing cameras paired with the corresponding steering angles. However, many current solutions focus solely on the visual information in individual camera frames, overlooking the important temporal relationships between frames. This paper introduces a novel approach to end-to-end steering control that combines a VGG16 convolutional neural network (CNN) architecture with Long Short-Term Memory (LSTM). The integrated model learns both the temporal dependencies within a sequence of images and the dynamics of the control process. We present and evaluate the accuracy of the proposed approach for steering angle prediction, comparing it with several CNN models, including the classic Nvidia model, the Nvidia model, and the MobileNetV2 model, each integrated with LSTM. The proposed method demonstrates superior accuracy compared to the other approaches, achieving the lowest loss. To evaluate its performance, we recorded video and saved the corresponding human-driven steering angles through the Robot Operating System (ROS2). The videos were then split into image sequences and fed to the model for training.
ISSN: 2158-107X, 2156-5570
DOI: 10.14569/IJACSA.2023.0140723
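The abstract describes applying a CNN to each frame of a camera sequence and passing the resulting features through an LSTM to regress the steering angle. The following is a minimal sketch of such a VGG16+LSTM pipeline in Keras, not the authors' exact architecture: the sequence length, frame size, LSTM width, dense layers, optimizer, and MSE loss are all assumptions made for illustration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Assumed sequence length and frame size (not specified in the record).
SEQ_LEN, H, W, C = 5, 224, 224, 3

# Frozen VGG16 backbone extracts a per-frame feature vector.
backbone = VGG16(include_top=False, weights="imagenet", pooling="avg",
                 input_shape=(H, W, C))
backbone.trainable = False

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),
    # Apply the CNN to every frame of the sequence independently.
    layers.TimeDistributed(backbone),
    # LSTM captures temporal dependencies across consecutive frames.
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    # Single regression output: the predicted steering angle.
    layers.Dense(1, activation="linear"),
])

# MSE loss is a common choice for steering-angle regression; the paper's
# exact loss and optimizer are not given here.
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Training would then consist of feeding batches of shape (batch, SEQ_LEN, H, W, C) built from the recorded image sequences, with the human-driven steering angles as regression targets.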