Pre-Trained Semantic Embeddings for POI Categories Based on Multiple Contexts

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 9 (September 2023), pp. 1-12
Authors: Junxiang Bing, Meng Chen, Min Yang, Weiming Huang, Yongshun Gong, Liqiang Nie
Format: Article
Language: English
Abstract
The past decade has witnessed a rapid growth of point-of-interest (POI) data, which is widely used to express the semantics of places. To understand POI semantics, current methods usually embed POI categories into a latent space via trajectory-based sequential models, while neglecting the underlying spatial information. Notably, the complex spatial relationships among POI categories contain substantial information that benefits meaningful semantic embeddings for the various categories. Inspired by this, we present a unified POI Category Embedding Method (CatEM for short), which jointly encodes the sequential transitions and spatial relations of POI categories, as well as the adaptive semantic neighbors of each POI category. The merits of CatEM are twofold: (1) it considers the pairwise spatial similarities between categories and places categories with larger similarity values adjacently in the latent space, and (2) it adaptively locates neighbor categories with similar semantics in the embedding space, improving the adaptivity of the POI category embeddings. The proposed pre-trained POI category embeddings are validated on three downstream tasks. Extensive experiments demonstrate the superiority of the proposed model compared to several cutting-edge baselines.
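The abstract describes two of CatEM's learning signals concretely enough to sketch: sequential transitions between categories in trajectories, and pairwise spatial similarities that should pull similar categories toward adjacent embeddings. Below is a minimal, hypothetical PyTorch sketch of such a joint objective. It is not the authors' implementation: the toy trajectory, the similarity matrix S, the loss weight ALPHA, and the skip-gram-style sequential term are all illustrative assumptions, and the paper's third component (adaptive semantic neighbors) is omitted for brevity.

# Minimal sketch (not the authors' code) of jointly learning POI category
# embeddings from sequential transitions and spatial similarities.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CATEGORIES = 5   # toy vocabulary of POI categories (assumed)
DIM = 16             # embedding dimension (assumed)
ALPHA = 0.5          # weight of the spatial term (assumed)

# Toy trajectory of category ids, e.g. cafe -> park -> museum -> ...
trajectory = torch.tensor([0, 1, 2, 1, 3, 4, 2, 0])

# Toy symmetric spatial-similarity matrix S[i, j] in [0, 1], standing in
# for pairwise spatial similarities between categories.
S = torch.rand(NUM_CATEGORIES, NUM_CATEGORIES)
S = (S + S.T) / 2
S.fill_diagonal_(1.0)

emb = nn.Embedding(NUM_CATEGORIES, DIM)
opt = torch.optim.Adam(emb.parameters(), lr=0.01)

for step in range(200):
    opt.zero_grad()

    # (1) Sequential context: predict the next category from the current
    # one via a softmax over dot products (skip-gram-style objective).
    cur, nxt = trajectory[:-1], trajectory[1:]
    logits = emb(cur) @ emb.weight.T          # (T-1, NUM_CATEGORIES)
    seq_loss = F.cross_entropy(logits, nxt)

    # (2) Spatial context: push cosine similarities between embedding
    # pairs toward the observed spatial similarities, so spatially
    # related categories end up adjacent in the latent space.
    z = F.normalize(emb.weight, dim=1)
    spa_loss = F.mse_loss(z @ z.T, S)

    (seq_loss + ALPHA * spa_loss).backward()
    opt.step()

print(emb.weight.detach()[:2])  # learned embeddings for categories 0 and 1

The design choice mirrored here is that the spatial term directly penalizes disagreement between embedding-space similarity and observed spatial similarity, which is one straightforward way to realize the abstract's claim that categories with larger similarity values receive adjacent embeddings.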
ISSN: 1041-4347
eISSN: 1558-2191
DOI: 10.1109/TKDE.2022.3218851