Learning Domain-Independent Representations via Shared Weight Auto-Encoder for Transfer Learning in Recommender Systems


Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 71961-71972
Authors: Wang, Qinqin; O'Reilly-Morgan, Diarmuid; Tragos, Elias Z.; Hurley, Neil; Smyth, Barry; Lawlor, Aonghus; Dong, Ruihai
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Despite many recent advances, state-of-the-art recommender systems still struggle to achieve good performance on sparse datasets. To address the sparsity issue, transfer learning techniques have been investigated for recommender systems, but they tend to impose strict constraints on the content and structure of the data in the source and target domains. For transfer learning methods to work well, there should normally be homogeneity between the source and target domains, or a high degree of overlap between the source and target items. In this paper we propose a novel transfer learning framework for mitigating the effects of sparsity and insufficient data. Our method requires neither homogeneity nor overlap between the source and target domains. We describe and evaluate a shared-parameter auto-encoder that jointly learns representations of user/item aspects in two domains, applying a Maximum Mean Discrepancy (MMD) loss during training to ensure that the source and target representations are similar in distribution. The approach is evaluated on a number of benchmark datasets and demonstrates improved recommendation performance when the learned representations are used in collaborative filtering. The code used for this work is available on github.com.
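The abstract's key alignment term is the Maximum Mean Discrepancy between source- and target-domain representations. The following is a minimal sketch of a (biased) squared-MMD estimate with a Gaussian kernel, not the authors' implementation; the function names, the single fixed bandwidth `sigma`, and the use of NumPy instead of a deep-learning framework are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))
    # between the rows of x (n, d) and y (m, d).
    d2 = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared MMD: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    # Zero when the two batches come from the same distribution (in the limit),
    # larger the further apart the two representation distributions are.
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

In a training loop of the kind the abstract describes, a term like `mmd2(encode(source_batch), encode(target_batch))` would be added to the reconstruction loss of the shared-weight auto-encoder, so that minimising the total loss pulls the two domains' latent distributions together.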
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3188709