Next Point-of-Interest Recommendation with Adaptive Graph Contrastive Learning
Saved in:
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-11, pp. 1-14
Main authors: , , , , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: Next point-of-interest (POI) recommendation predicts a user's next movement and facilitates location-based applications such as destination suggestion and travel planning. State-of-the-art (SOTA) methods learn an adaptive graph from user trajectories and compute POI representations using graph neural networks (GNNs). However, a single graph cannot capture the diverse dependencies among POIs (e.g., geographical proximity and transition frequency). To tackle this limitation, we propose the Adaptive Graph Contrastive Learning (AGCL) framework. AGCL constructs multiple adaptive graphs, each modeling one kind of POI dependency and producing one POI representation; the POI representations from the different graphs are then merged into a multi-facet representation that encodes comprehensive information. To train the POI representations, we tailor a graph-based contrastive learning objective, which encourages the representations of similar POIs to align and those of dissimilar POIs to differentiate. Moreover, to learn the sequential regularities of user trajectories, we design an attention mechanism that integrates spatial-temporal information into the POI representations. An explicit spatial-temporal bias is also employed to adjust the predictions for enhanced accuracy. We compare AGCL with 10 state-of-the-art baselines on 3 datasets. The results show that AGCL outperforms all baselines, improving average accuracy over the best-performing baseline by 10.14%.
ISSN: 1041-4347
DOI: 10.1109/TKDE.2024.3509480
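The abstract's two core ideas (merging per-graph POI representations into a multi-facet representation, and a graph-based contrastive objective that pulls similar POIs together) can be illustrated with a minimal sketch. This assumes a standard InfoNCE-style formulation and mean-pooling as the merge step; all function names and design choices here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def normalize(x):
    """L2-normalize each row so cosine similarity is a dot product."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def multi_facet_merge(reps):
    """Merge per-graph POI representations (list of (n_poi, d) arrays,
    one per adaptive graph) into one multi-facet representation.
    Mean-pooling is an assumed, simple merge strategy."""
    return normalize(np.mean(reps, axis=0))

def contrastive_loss(z, pos_pairs, temperature=0.2):
    """InfoNCE-style loss: for each (i, j) pair of similar POIs,
    treat j as the positive and all other POIs as negatives."""
    sim = z @ z.T / temperature          # pairwise similarity logits
    losses = []
    for i, j in pos_pairs:
        logits = np.delete(sim[i], i)    # drop self-similarity
        tgt = j - 1 if j > i else j      # index of j after deletion
        log_prob = logits[tgt] - np.log(np.exp(logits).sum())
        losses.append(-log_prob)         # pull j close, push others away
    return float(np.mean(losses))

# Usage: two adaptive graphs, each yielding a representation of 5 POIs.
rng = np.random.default_rng(0)
reps = [rng.normal(size=(5, 8)) for _ in range(2)]
z = multi_facet_merge(reps)
loss = contrastive_loss(z, pos_pairs=[(0, 1), (2, 3)])
```

Minimizing this loss raises the similarity of the designated similar-POI pairs relative to all other pairs, which is the alignment/differentiation behavior the abstract describes.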