Tunable Soft Prompts are Messengers in Federated Learning
Main authors: , , , ,
Format: Article
Language: English
Online access: Order full text
Summary: Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources, alleviating the privacy concerns that arise from directly sharing local data. However, the lack of model privacy protection in FL becomes a non-negligible challenge, especially when participants want to fine-tune models based on a proprietary large language model in a federated manner. In this study, we propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts. These soft prompts, updated and transmitted between the server and clients, assume the role of the global model parameters and serve as messengers that deliver useful knowledge from the local data and the global model. Because the global model itself does not need to be shared and local training is conducted on an auxiliary model with fewer parameters than the global model, the proposed approach protects the global model while reducing communication and computation costs in FL. Extensive experiments show the effectiveness of the proposed approach compared to several baselines. We have released the source code at https://github.com/alibaba/FederatedScope/tree/fedsp/federatedscope/nlp/fedsp.
DOI: 10.48550/arxiv.2311.06805
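
To illustrate the exchange described in the summary, below is a minimal sketch, assuming a simplified setup: each client trains a small auxiliary model that prepends a tunable soft prompt to its inputs, and only the prompt vectors travel to the server, where they are averaged, so the large global model never leaves the server. All names here (AuxiliaryClientModel, local_update, server_aggregate) and the toy shapes and hyperparameters are hypothetical illustrations, not the FederatedScope fedsp implementation.

```python
# Hypothetical sketch of soft-prompt exchange in FL; not the authors' code.
import torch
import torch.nn as nn


class AuxiliaryClientModel(nn.Module):
    """Small client-side model; the proprietary global model stays on the server."""

    def __init__(self, vocab_size=1000, hidden=64, prompt_len=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)
        # Tunable soft prompt: a few trainable vectors prepended to every input.
        self.soft_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, hidden))

    def forward(self, token_ids):
        x = self.embed(token_ids)                                  # (B, T, H)
        prompt = self.soft_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.encoder(torch.cat([prompt, x], dim=1))       # prepend prompt
        return self.head(out[:, -1])                               # next-token logits


def local_update(model, batches, epochs=1, lr=1e-2):
    """Client-side training on local data; only the soft prompt is returned,
    acting as the 'messenger' instead of full model parameters."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for token_ids, labels in batches:
            loss = nn.functional.cross_entropy(model(token_ids), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.soft_prompt.detach().clone()


def server_aggregate(client_prompts):
    """Server averages the received prompts (FedAvg-style); per the abstract,
    the server would further refine them with its private global model before
    broadcasting the updated prompt back to the clients."""
    return torch.stack(client_prompts).mean(dim=0)


if __name__ == "__main__":
    # One communication round with two clients on random toy data.
    clients = [AuxiliaryClientModel() for _ in range(2)]
    data = [[(torch.randint(0, 1000, (4, 10)), torch.randint(0, 1000, (4,)))]
            for _ in range(2)]
    prompts = [local_update(m, d) for m, d in zip(clients, data)]
    new_prompt = server_aggregate(prompts)
    for m in clients:                       # broadcast the aggregated prompt
        m.soft_prompt.data.copy_(new_prompt)
```

In this sketch the only tensor ever communicated is the prompt of shape (prompt_len, hidden), which is what keeps communication cost low and the global model private; the actual protocol, datasets, and model sizes are described in the paper and repository linked above.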