Dynamic Pricing and Fleet Management for Electric Autonomous Mobility on Demand Systems

Bibliographic Details
Main Authors: Turan, Berkay; Pedarsani, Ramtin; Alizadeh, Mahnoosh
Format: Article
Language: English

Description
Abstract: The proliferation of ride-sharing systems is a major driver in the advancement of autonomous and electric vehicle technologies. This paper considers the joint routing, battery charging, and pricing problem faced by a profit-maximizing transportation service provider that operates a fleet of autonomous electric vehicles. We first establish the static planning problem by considering time-invariant system parameters and determine the optimal static policy. While the static policy keeps the queues of customers waiting for rides stable even when the system dynamics are taken into account, it is inefficient in practice, as it can lead to long customer wait times and low profits. To accommodate the stochastic nature of trip demand, renewable energy availability, and electricity prices, and to manage the fleet optimally given the need for integer vehicle allocations, a real-time policy is required. The optimal real-time policy, which acts on full state information of the system, is the solution of a complex dynamic program. We argue, however, that solving for this policy exactly with dynamic programming methods is intractable, and we therefore apply deep reinforcement learning to develop a near-optimal control policy. Two case studies conducted in Manhattan and San Francisco demonstrate the efficacy of our real-time policy in terms of network stability and profit, with queue lengths up to 200 times shorter than under the static policy.
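The abstract's methodological claim, that exact dynamic programming is intractable while deep reinforcement learning yields a near-optimal real-time policy, can be illustrated with a toy sketch. The state features, environment dynamics, reward shape, and REINFORCE-style update below are all illustrative assumptions, not the authors' implementation; the paper's actual algorithm, state space, and reward are specified in the full text.

import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5   # hypothetical: fraction of idle fleet sent to chargers (0%, 25%, ..., 100%)
STATE_DIM = 3   # hypothetical features: [queue length, mean battery level, electricity price]
theta = np.zeros((STATE_DIM, N_ACTIONS))  # linear softmax policy parameters

def featurize(state):
    # Crude normalization so the linear policy's logits stay well-scaled.
    return state / np.array([20.0, 1.0, 2.0])

def policy(state):
    logits = featurize(state) @ theta
    p = np.exp(logits - logits.max())   # max-subtraction for numerical stability
    return p / p.sum()

def step(state, action):
    # Purely illustrative dynamics: serving the queue earns revenue, charging
    # costs money at the current electricity price but restores battery.
    queue, battery, price = state
    charge_frac = action / (N_ACTIONS - 1)
    served = (1 - charge_frac) * min(queue, 10 * battery)
    reward = 2.0 * served - 10.0 * price * charge_frac
    queue = max(queue + rng.poisson(3) - served, 0.0)
    battery = float(np.clip(battery + 0.2 * charge_frac - 0.05 * (1 - charge_frac), 0, 1))
    price = float(np.clip(price + rng.normal(0, 0.1), 0.1, 2.0))
    return np.array([queue, battery, price]), reward

for episode in range(500):
    state = np.array([5.0, 0.8, 1.0])
    trajectory, rewards = [], []
    for t in range(50):
        p = policy(state)
        a = rng.choice(N_ACTIONS, p=p)
        next_state, r = step(state, a)
        trajectory.append((featurize(state), a, p))
        rewards.append(r)
        state = next_state
    # Reward-to-go, standardized as a crude baseline to reduce gradient variance.
    returns = np.cumsum(rewards[::-1])[::-1]
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    for (x, a, p), g in zip(trajectory, returns):
        grad = -np.outer(x, p)   # d log softmax / d theta for the taken action
        grad[:, a] += x
        theta += 1e-3 * g * grad

In a real system the hand-built softmax policy would be replaced by a deep network over the full fleet state, which is what makes the approach scale where tabular dynamic programming does not.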
DOI: 10.48550/arxiv.1909.06962