Courier routing and assignment for food delivery service using reinforcement learning

Bibliographic Details
Published in: Computers & Industrial Engineering, 2022-02, Vol. 164, p. 107871, Article 107871
Main Authors: Bozanta, Aysun; Cevik, Mucahit; Kavaklioglu, Can; Kavuk, Eray M.; Tosun, Ayse; Sonuc, Sibel B.; Duranel, Alper; Basar, Ayse
Format: Article
Language: English
Online Access: Full Text
Description
Abstract:
• We model a real-world food delivery service as a Markov decision process.
• We compare Q-learning, DDQN, and a rule-based policy.
• Policies generated by Q-learning and DDQN collect more reward than the benchmark.
• A policy generated by DDQN for a single courier outperforms all other algorithms.

We consider a Markov decision process (MDP) model mimicking a real-world food delivery service, where the objective is to maximize the revenue derived from served requests given a limited number of couriers over a period of time. The model incorporates the courier locations, order origins, and order destinations. Each courier's task is to pick up an assigned order and deliver it to the requested destination. We apply three different approaches to solve this problem. In the first approach, we simplify the model to a single-courier case and solve the resulting model using Q-learning. The resulting policy is then applied to each courier in the multi-courier model, under the assumption that all couriers are identical. In the second approach, we use the same logic; however, the underlying single-courier model is solved using Double Deep Q-Networks (DDQN). In the third approach, we consider the extensive model, in which a system state consists of the positions of all couriers and all orders in the system, and solve it with DDQN. Policies generated by these approaches are compared against a benchmark rule-based policy. We observe that the policy obtained by training a single courier with Q-learning accumulates higher rewards than the rule-based policy. In addition, the DDQN algorithm for a single courier outperforms both the Q-learning and rule-based approaches; however, DDQN performance is highly dependent on the algorithm's hyper-parameters.
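As a rough illustration of the single-courier first approach described in the abstract, the sketch below trains tabular Q-learning on a toy grid-world pickup-and-delivery MDP. The grid size, reward values, hyper-parameters, and epsilon-greedy scheme are hypothetical choices for this sketch, not values taken from the paper; in the paper's setup the learned single-courier policy would then be applied independently to every courier.

```python
# Illustrative sketch only: a toy single-courier pickup/delivery MDP on a
# small grid, solved with tabular Q-learning. All constants are hypothetical.
import random

N = 4                                          # hypothetical N x N grid
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # move up/down/left/right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1             # assumed learning parameters
EPISODES, MAX_STEPS = 5000, 50

def clamp(v):
    return max(0, min(N - 1, v))

def step(state, action):
    """State = (courier, origin, dest, carrying). Reward: -1 per move,
    +20 on delivery (hypothetical values standing in for order revenue)."""
    (cx, cy), origin, dest, carrying = state
    nxt = (clamp(cx + action[0]), clamp(cy + action[1]))
    if not carrying and nxt == origin:          # pick up the assigned order
        return (nxt, origin, dest, True), -1, False
    if carrying and nxt == dest:                # deliver to the destination
        return (nxt, origin, dest, True), 20, True
    return (nxt, origin, dest, carrying), -1, False

Q = {}                                          # sparse Q-table, default 0
def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(EPISODES):
    cells = [(x, y) for x in range(N) for y in range(N)]
    courier, origin, dest = random.sample(cells, 3)
    state, done = (courier, origin, dest, False), False
    for _ in range(MAX_STEPS):
        if done:
            break
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda x: q(state, x)))
        nxt, r, done = step(state, a)
        # standard one-step Q-learning update
        target = r + (0 if done else GAMMA * max(q(nxt, b) for b in ACTIONS))
        Q[(state, a)] = q(state, a) + ALPHA * (target - q(state, a))
        state = nxt
```

Replacing the Q-table with a neural network and a separate target network for action evaluation would move this sketch toward the DDQN variant used in the paper's second and third approaches.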
ISSN: 0360-8352, 1879-0550
DOI: 10.1016/j.cie.2021.107871