Fast and Scalable Ridesharing Search


Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-11, Vol. 36 (11), pp. 6159-6170
Main authors: Pan, James Jie; Li, Guoliang
Format: Article
Language: English
Online access: Order full text
Description
Abstract: In the next few decades, it is estimated that a quarter of all trips worldwide will be served by shared mobility, driven in part by its lower carbon footprint compared to private mobility. In particular, on-demand ridesharing is appealing due to its convenience, matching passengers needing rides to vehicles in real time while optimizing the matching. While this matching problem is computationally challenging, the state-of-the-art greedy search algorithm assigns passengers one at a time to the locally best vehicle and has been shown to perform well in practice. However, scaling the algorithm by parallelizing searches for multiple requests remains challenging due to contention for vehicle tours. Moreover, the request latency may still be too high for on-demand requests. In this paper, we give several techniques to speed up and scale out ridesharing search. To deal with data contention while scaling out greedy search, we introduce a "map-release" and ticketing system that sacrifices read-write consistency to achieve high concurrency, even under high contention, while avoiding the expensive aborts incurred by optimistic approaches. To address high request latency, we give a caching technique to speed up the tour-expansion subroutine of greedy search, and we also give a pruning technique to reduce the tour candidates even further compared to existing techniques. Together, these techniques deliver around 7x the throughput and an order of magnitude lower latency on a real instance compared to the "embarrassingly parallel" parallelized map approach, and with better scalability.
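To make the greedy-search setting concrete, the following is a minimal, hypothetical sketch of one-request-at-a-time greedy assignment: each request is matched to the vehicle with the lowest insertion cost, and a per-vehicle lock stands in as a simplified placeholder for the paper's map-release/ticketing scheme. All names, the toy cost function, and the locking strategy are assumptions for illustration, not the authors' implementation.

```python
import threading

class Vehicle:
    """A vehicle with a position and a tour of assigned pickup points."""
    def __init__(self, vid, position):
        self.vid = vid
        self.position = position
        self.tour = []                # pickup points appended greedily
        self.lock = threading.Lock()  # simplified stand-in for a ticket

def insertion_cost(vehicle, request):
    # Toy 1-D cost: distance from the vehicle's last stop to the pickup.
    last = vehicle.tour[-1] if vehicle.tour else vehicle.position
    return abs(last - request)

def greedy_assign(vehicles, request):
    # Read phase: search all vehicles without locking, then acquire
    # only the winning vehicle's lock before mutating its tour. Under
    # concurrency this sacrifices strict read-write consistency (the
    # chosen tour may change between search and write), echoing the
    # trade-off the abstract describes.
    best = min(vehicles, key=lambda v: insertion_cost(v, request))
    with best.lock:
        best.tour.append(request)
    return best.vid

vehicles = [Vehicle(0, position=0), Vehicle(1, position=10)]
assert greedy_assign(vehicles, 2) == 0  # pickup 2 is closer to vehicle 0
assert greedy_assign(vehicles, 9) == 1  # pickup 9 is closer to vehicle 1
```

In a parallel deployment, many such searches would run concurrently over the same vehicle fleet, which is where contention for vehicle tours arises and where the paper's ticketing, caching, and pruning techniques apply.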
ISSN: 1041-4347; 1558-2191
DOI: 10.1109/TKDE.2024.3418433