Label Inference Attack against Split Learning under Regression Setting
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: As a crucial building block in vertical Federated Learning (vFL), Split Learning (SL) has demonstrated its practicality in two-party model training collaboration, where one party holds the features of the data samples and the other party holds the corresponding labels. The method is claimed to be private because the shared information consists only of embedding vectors and gradients rather than the private raw data and labels. However, recent works have shown that the private labels can be leaked through the gradients. These existing attacks only work under the classification setting, where the private labels are discrete. In this work, we go a step further and study the leakage under the regression setting, where the private labels are continuous numbers (rather than the discrete labels of classification). The unbounded output range makes it harder for previous attacks to infer the continuous labels. To address this limitation, we propose a novel learning-based attack that integrates gradient information with additional learning regularization objectives derived from model training properties, and can effectively infer labels under regression settings. Comprehensive experiments on various datasets and models demonstrate the effectiveness of the proposed attack. We hope our work can pave the way for future analyses that make the vFL framework more secure.
DOI: 10.48550/arxiv.2301.07284
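
The abstract above describes a split-learning setup in which only cut-layer embeddings and their gradients cross the party boundary, and a learning-based attack that recovers continuous labels from those gradients. The sketch below is a minimal illustration of that general idea, not the paper's actual attack: it assumes PyTorch, a linear top model, MSE loss, and a plain gradient-matching objective, while the paper's extra regularization objectives on training properties are omitted. All names (`feature_party`, `label_party_head`, `surrogate_head`, `y_guess`) are hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Split-learning roles: the feature party owns the bottom model and the raw
# features; the label party owns the top model and the private labels.
feature_party = nn.Linear(10, 4)     # bottom model f: features -> embedding
label_party_head = nn.Linear(4, 1)   # top model g: embedding -> prediction

x = torch.randn(8, 10)               # private features (attacker's own data)
y_true = torch.randn(8, 1)           # private continuous labels (victim's side)

# One SL training step: only the embedding and its gradient cross the boundary.
emb = feature_party(x)
emb_shared = emb.detach().requires_grad_(True)   # what the label party receives
loss = nn.functional.mse_loss(label_party_head(emb_shared), y_true)
loss.backward()
grad_observed = emb_shared.grad.detach()         # gradient returned to the attacker

# Attacker (feature party): guess labels and a surrogate head, then optimize
# both so that the simulated cut-layer gradient matches the observed one.
surrogate_head = nn.Linear(4, 1)
y_guess = torch.zeros(8, 1, requires_grad=True)
opt = torch.optim.Adam([y_guess, *surrogate_head.parameters()], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    emb_sim = emb.detach().requires_grad_(True)
    sim_loss = nn.functional.mse_loss(surrogate_head(emb_sim), y_guess)
    grad_sim, = torch.autograd.grad(sim_loss, emb_sim, create_graph=True)
    # Plain gradient-matching objective; the extra learning regularization
    # terms described in the abstract are not modeled in this sketch.
    attack_loss = ((grad_sim - grad_observed) ** 2).sum()
    attack_loss.backward()
    opt.step()

print("true labels:    ", y_true.flatten().tolist())
print("inferred labels:", y_guess.detach().flatten().tolist())
```

Note that plain gradient matching is under-determined for a regression head (different label and surrogate-head combinations can explain the same cut-layer gradients), so the recovered values here are only approximate; this is plausibly one reason the paper integrates additional regularization objectives rather than relying on gradient information alone.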