Comparative Study of End-to-end Deep Learning Methods for Self-driving Car

Bibliographic details
Published in: International Journal of Intelligent Systems and Applications, 2020-10, Vol. 12 (5), p. 15-27
Main authors: Youssef, Fenjiro; Houda, Benbrahim
Format: Article
Language: English
Abstract: The self-driving car is one of the most fascinating applications and most active research areas of artificial intelligence. It uses end-to-end deep learning models to make steering and speed decisions, relying mainly on Convolutional Neural Networks for computer vision, connected to a fully connected network that outputs the control commands. In this paper, we introduce the self-driving car domain and the CARLA simulation environment, with a focus on the lane-keeping task, then present the two main end-to-end approaches used to solve this problem. We begin with Deep Imitation Learning (IL), specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations by trying to mimic the expert's behavior, and then describe Deep Reinforcement Learning (DRL), specifically DQN and DDPG (Deep Q-learning and Deep Deterministic Policy Gradient, respectively), which learn by trial and error, modeling the task as a Markov Decision Process (MDP) to obtain the best policy for the driving agent. In the last chapter, we compare the IL and DRL algorithms based on a new approach, using metrics from deep learning (the loss during the training phase) and from self-driving (the episode duration before a crash and the average distance from the road center during the testing phase). The results of training and testing on the CARLA simulator reveal that the IL algorithm outperforms the DRL algorithms when the agents are evaluated on the circuit they were trained on, but the DRL agents show better adaptability on new roads.
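
To make the conditional imitation learning idea in the abstract concrete, here is a minimal PyTorch sketch of a branched network. This is an editorial illustration, not the architecture from the paper: the ConditionalImitationNet name, the layer sizes, and the three-command setup (e.g. follow lane, turn left, turn right) are assumptions. A CNN encodes the camera frame, and the high-level navigation command selects which fully connected head emits the control outputs (e.g. steering and throttle).

import torch
import torch.nn as nn

class ConditionalImitationNet(nn.Module):
    """Branched conditional imitation policy (illustrative sketch)."""

    def __init__(self, num_commands: int = 3, num_controls: int = 2):
        super().__init__()
        # Convolutional encoder for the front-camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 128)
        )
        # One fully connected head per high-level command.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                          nn.Linear(64, num_controls))
            for _ in range(num_commands)
        )

    def forward(self, image: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W); command: (batch,) integer command index.
        features = self.encoder(image)
        # Run every branch, then keep the branch matching each sample's command.
        outputs = torch.stack([branch(features) for branch in self.branches], dim=1)
        index = command.view(-1, 1, 1).expand(-1, 1, outputs.size(-1))
        return outputs.gather(1, index).squeeze(1)  # -> (batch, num_controls)

# Usage: such a policy is trained by regressing the predicted controls onto the
# expert's controls (e.g. with nn.MSELoss()) over command-labeled demonstrations.
net = ConditionalImitationNet()
frames = torch.randn(4, 3, 88, 200)    # batch of camera frames (size assumed)
commands = torch.tensor([0, 1, 2, 0])  # high-level command per frame
controls = net(frames, commands)       # -> shape (4, 2)

The branch selection is what makes the policy "conditional": the same visual features can map to different controls depending on the planner's command, which resolves ambiguous situations such as intersections.
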
ISSN: 2074-904X, 2074-9058
DOI: 10.5815/ijisa.2020.05.02