LightPath: Lightweight and Scalable Path Representation Learning
Format: Article
Language: English
Abstract: Movement paths are used widely in intelligent transportation and smart city
applications. To serve such applications, path representation learning aims to
provide compact representations of paths that enable efficient and accurate
operations when used for different downstream tasks such as path ranking and
travel cost estimation. In many cases, it is attractive that the path
representation learning is lightweight and scalable; in resource-limited
environments and under green computing limitations, it is essential. Yet,
existing path representation learning studies focus on accuracy and pay at most
secondary attention to resource consumption and scalability.
We propose a lightweight and scalable path representation learning framework,
termed LightPath, that aims to reduce resource consumption and achieve
scalability without affecting accuracy, thus enabling broader applicability.
More specifically, we first propose a sparse auto-encoder that ensures that the
framework achieves good scalability with respect to path length. Next, we
propose a relational reasoning framework to enable faster training of more
robust sparse path encoders. We also propose global-local knowledge
distillation to further reduce the size and improve the performance of sparse
path encoders. Finally, we report extensive experiments on two real-world
datasets to offer insight into the efficiency, scalability, and effectiveness
of the proposed framework.
DOI: 10.48550/arxiv.2307.10171
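The abstract names three components: a sparse auto-encoder, a relational reasoning framework, and global-local knowledge distillation. The following is a minimal, hypothetical PyTorch sketch of how the first and third of these could fit together; it is not the authors' released code. The encoder architecture, the random token-dropping sparsification, the teacher/student sizes, and the distillation loss are all illustrative assumptions, and positional encodings as well as the relational-reasoning objective are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PathEncoder(nn.Module):
    """Small Transformer encoder over sequences of road-segment ids (assumed architecture)."""

    def __init__(self, vocab_size, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, path):                        # path: (batch, seq_len) segment ids
        tokens = self.encoder(self.embed(path))     # (batch, seq_len, d_model) token features
        return tokens, tokens.mean(dim=1)           # token-level features, pooled path vector


def sparsify(path, keep_ratio=0.5):
    """Keep a random subset of path tokens so encoding cost shrinks with the keep ratio."""
    keep = max(1, int(path.size(1) * keep_ratio))
    idx = torch.randperm(path.size(1), device=path.device)[:keep]
    return path[:, idx], idx


def global_local_distill(s_tok, s_vec, t_tok, t_vec, idx):
    """Hypothetical loss aligning student and teacher at path (global) and segment (local) level."""
    global_term = F.mse_loss(s_vec, t_vec.detach())
    local_term = F.mse_loss(s_tok, t_tok[:, idx, :].detach())
    return global_term + local_term


# Toy usage: a deeper teacher encodes full paths, a shallow student encodes sparse paths.
teacher = PathEncoder(vocab_size=1000, n_layers=4).eval()
student = PathEncoder(vocab_size=1000, n_layers=1)
paths = torch.randint(0, 1000, (8, 32))             # batch of 8 paths, 32 segments each
sparse_paths, idx = sparsify(paths)
with torch.no_grad():
    t_tok, t_vec = teacher(paths)
s_tok, s_vec = student(sparse_paths)
loss = global_local_distill(s_tok, s_vec, t_tok, t_vec, idx)
loss.backward()
```

In this sketch the student only attends over the kept tokens, so its cost scales with the keep ratio rather than the full path length, while the two distillation terms transfer both the pooled path representation and the per-segment features from the larger teacher.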