FELLAS: Enhancing Federated Sequential Recommendation with LLM as External Services
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Federated sequential recommendation (FedSeqRec) has gained growing attention due to its ability to protect user privacy. Unfortunately, the performance of FedSeqRec is still unsatisfactory because the models used in FedSeqRec have to be lightweight to accommodate communication bandwidth and clients' on-device computational resource constraints. Recently, large language models (LLMs) have exhibited strong transferable and generalized language understanding abilities, and therefore, in the NLP area, many downstream tasks now utilize LLMs as a service to achieve superior performance without constructing complex models. Inspired by this successful practice, we propose a generic FedSeqRec framework, FELLAS, which aims to enhance FedSeqRec by utilizing LLMs as an external service. Specifically, FELLAS employs an LLM server to provide both item-level and sequence-level representation assistance. The item-level representation service is queried by the central server to enrich the original ID-based item embeddings with textual information, while the sequence-level representation service is accessed by each client. However, invoking the sequence-level representation service requires clients to send their sequences to the external LLM server. To safeguard privacy, we apply a dx-privacy-satisfying sequence perturbation that protects clients' sensitive data with a formal guarantee. Additionally, a contrastive learning-based method is designed to transfer knowledge from the noisy sequence representations to clients' sequential recommendation models. Furthermore, to empirically validate the privacy protection capability of FELLAS, we propose two interacted-item inference attacks. Extensive experiments conducted on three datasets with two widely used sequential recommendation models demonstrate the effectiveness and privacy-preserving capability of FELLAS.
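The abstract does not spell out the perturbation mechanism, but a standard way to satisfy dx-privacy for discrete item sequences is to add calibrated noise to each item's embedding and then snap back to the nearest real item, so the LLM server only ever sees plausible substitute items. A minimal sketch of that idea, assuming a toy vocabulary and Euclidean-metric dx-privacy (the names `vocab`, `emb`, and `dx_perturb` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random embeddings; in practice these would be
# pretrained item/token embeddings shared with the clients.
vocab = ["shoes", "boots", "laptop", "phone", "novel"]
emb = rng.normal(size=(len(vocab), 8))

def dx_perturb(item: str, epsilon: float) -> str:
    """Replace `item` with its nearest vocabulary neighbour after adding
    noise whose norm follows a Gamma distribution -- the usual planar-Laplace
    style construction for dx-privacy in embedding space."""
    v = emb[vocab.index(item)]
    d = v.shape[0]
    # Noise direction: uniform on the unit sphere.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    # Noise magnitude: Gamma(shape=d, scale=1/epsilon).
    norm = rng.gamma(shape=d, scale=1.0 / epsilon)
    noisy = v + norm * direction
    # Post-process: snap back to the closest real vocabulary item,
    # so the perturbed sequence still consists of valid items.
    dists = np.linalg.norm(emb - noisy, axis=1)
    return vocab[int(np.argmin(dists))]

# Perturb a short interaction sequence before sending it to the LLM server.
perturbed = [dx_perturb(t, epsilon=10.0) for t in ["shoes", "phone"]]
```

Smaller epsilon produces larger noise and hence more frequent item substitutions, trading recommendation utility for stronger protection against the interacted-item inference attacks the paper studies.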
DOI: 10.48550/arxiv.2410.04927
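The contrastive transfer step is likewise only named in the abstract; a common instantiation of "contrastive learning to transfer knowledge from noisy sequence representations" is an InfoNCE objective that pulls each client's local sequence representation toward the LLM's representation of the same (perturbed) sequence, with other in-batch sequences acting as negatives. A sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def info_nce(local_reps: np.ndarray, llm_reps: np.ndarray,
             temperature: float = 0.1) -> float:
    """InfoNCE loss aligning local sequence representations with the
    matching LLM sequence representations (positives on the diagonal)."""
    # L2-normalise both views so the logits are cosine similarities.
    a = local_reps / np.linalg.norm(local_reps, axis=1, keepdims=True)
    b = llm_reps / np.linalg.norm(llm_reps, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Maximise log-probability of the matching pair for each sequence.
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(1)
loss = info_nce(rng.normal(size=(4, 16)), rng.normal(size=(4, 16)))
```

Minimising this loss lets the lightweight on-device model absorb the LLM's sequence-level knowledge without ever training on the raw (unperturbed) sequences.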