MTL-Leak: Privacy Risk Assessment in Multi-Task Learning

Bibliographic details
Published in: IEEE Transactions on Dependable and Secure Computing, 2024-01, Vol. 21 (1), p. 204-215
Main authors: Yan, Hongyang; Yan, Anli; Hu, Li; Liang, Jiaming; Hu, Haibo
Format: Article
Language: English
Description
Abstract: Multi-task learning (MTL) supports simultaneous training over multiple related tasks and learns a shared representation. While this improves generalization over training on a single task, MTL carries a higher privacy risk than traditional single-task learning because more sensitive information is extracted and learned in a correlated manner. Unfortunately, very few works have attempted to address the privacy risks posed by MTL. In this article, we first investigate such risks by designing a model extraction attack (MEA) and a membership inference attack (MIA) in MTL. We then evaluate the privacy risks on six MTL model architectures and two popular MTL datasets; the results show that both the number of tasks and the complexity of the training data play an important role in attack performance. Our investigation shows that MTL is more vulnerable than traditional single-task learning under both attacks.
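The article's own MEA and MIA designs are not reproduced in this record. As a rough illustration of the membership inference idea the abstract refers to, the sketch below (Python, synthetic data, all names hypothetical) shows the standard confidence-thresholding MIA baseline: a model tends to be more confident on its training members than on unseen samples, and an attacker separates the two with a threshold. In practice the confidences would come from querying the target (multi-task) model rather than from synthetic distributions.

# Hypothetical sketch of a confidence-thresholding membership inference
# attack; this is NOT the paper's attack, only the simplest MIA baseline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a target model's max-softmax confidences.
member_conf = rng.beta(8, 2, size=1000)      # training members: high confidence
nonmember_conf = rng.beta(4, 4, size=1000)   # unseen samples: lower confidence

def mia_accuracy(members, nonmembers, threshold):
    """Predict 'member' when confidence exceeds the threshold."""
    tp = (members > threshold).mean()        # members correctly flagged
    tn = (nonmembers <= threshold).mean()    # non-members correctly rejected
    return (tp + tn) / 2                     # balanced attack accuracy

# Sweep thresholds; the best one summarizes how separable the two sets are,
# i.e., how much membership information the model leaks.
thresholds = np.linspace(0, 1, 101)
best = max(thresholds, key=lambda t: mia_accuracy(member_conf, nonmember_conf, t))
print(f"best threshold={best:.2f}, "
      f"attack accuracy={mia_accuracy(member_conf, nonmember_conf, best):.3f}")

An attack accuracy well above 0.5 indicates leakage; the abstract's finding is that, under attacks of this kind, MTL models leak more than single-task baselines.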
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2023.3247869