Context-embedded hypergraph attention network and self-attention for session recommendation

Bibliographic details
Published in: Scientific Reports 2024-08, Vol. 14 (1), p. 19413-16, Article 19413
Authors: Zhang, Zhigao; Zhang, Hongmei; Zhang, Zhifeng; Wang, Bin
Format: Article
Language: English
Description
Abstract: Modeling user intention from the limited evidence in short-term historical sequences is a major challenge in session recommendation. Research in this domain extends from traditional methods to deep learning. However, most of these methods concentrate solely on the sequential dependence or pairwise relations within the session, disregarding the inherent consistency among items. Additionally, there is a lack of research on context adaptation in session intention learning. To this end, we propose a novel session-based model named C-HAN, which consists of two parallel modules: a context-embedded hypergraph attention network and self-attention. These modules capture the inherent consistency and the sequential dependencies between items, respectively. In the hypergraph attention network module, different types of interaction contexts are introduced to enhance the model's contextual awareness. Finally, a soft-attention mechanism efficiently integrates the two types of information, collaboratively constructing the session representation. Experimental validation on three real-world datasets demonstrates the superior performance of C-HAN compared to state-of-the-art methods. The results show that C-HAN achieves average improvements of 6.55%, 5.91%, and 6.17% over the runner-up baseline on the Precision@K, Recall@K, and MRR evaluation metrics, respectively.
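
To make the two-branch design concrete, below is a minimal PyTorch sketch of the structure the abstract describes: a hypergraph-attention branch that models item consistency over a session-item incidence matrix, a self-attention branch that models sequential dependence, and a soft-attention step that fuses the two into the session representation. All class names, dimensions, and the exact attention and fusion formulas here are illustrative assumptions, not the authors' implementation; see the paper (DOI below) for the precise C-HAN formulation, including its context embeddings.

    import torch
    import torch.nn as nn

    class HypergraphAttention(nn.Module):
        """One hypergraph-attention layer: each item attends over the
        hyperedges (sessions) it belongs to, given a binary item-edge
        incidence matrix H. (Illustrative; not the paper's exact layer.)"""
        def __init__(self, dim):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)

        def forward(self, x, H):
            # x: (n_items, dim) item embeddings; H: (n_items, n_edges).
            # Hyperedge features as the mean of their member items.
            edge = (H.t() @ x) / H.t().sum(1, keepdim=True).clamp(min=1)
            scores = self.q(x) @ self.k(edge).t()          # (n_items, n_edges)
            scores = scores.masked_fill(H == 0, float('-inf'))
            attn = torch.softmax(scores, dim=1)            # NaN for items in no edge;
            return attn @ edge                             # real code would mask those.

    class CHANSketch(nn.Module):
        """Two parallel branches plus soft-attention fusion, mirroring
        the structure described in the abstract."""
        def __init__(self, n_items, dim=8, n_heads=2):
            super().__init__()
            self.emb = nn.Embedding(n_items, dim)
            self.hgat = HypergraphAttention(dim)                 # consistency branch
            self.sa = nn.TransformerEncoderLayer(dim, n_heads,   # sequential branch
                                                 batch_first=True)
            self.fuse = nn.Linear(2 * dim, 1)                    # soft attention

        def forward(self, session, H):
            # session: (seq_len,) item indices of the current session.
            consistent = self.hgat(self.emb.weight, H)[session]           # (seq_len, dim)
            sequential = self.sa(self.emb(session).unsqueeze(0)).squeeze(0)
            # Soft attention over positions, conditioned on both views.
            a = torch.softmax(self.fuse(torch.cat([consistent, sequential], -1)), dim=0)
            s = (a * (consistent + sequential)).sum(0)                    # session vector
            return s @ self.emb.weight.t()                                # next-item scores

As a toy usage example: with n_items = 10, a current session torch.tensor([1, 4, 7]), and a binary incidence matrix H of shape (10, n_sessions) built from training sessions, CHANSketch(10)(session, H) returns a score vector over all items for next-item prediction.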
ISSN: 2045-2322
DOI: 10.1038/s41598-024-66349-7