Taking Away Both Model and Data: Remember Training Data by Parameter Combinations
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, 2022-12, Vol. 6 (6), pp. 1427-1437
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Machine Learning (ML) model hatcheries have emerged to help ML model producers. The producer only needs to upload an untrained ML model to the hatchery together with a specific task, and can then deploy the returned trained ML model in real-world applications. Although the hatchery's local private data are not directly accessed by the ML model producer, some backdoor attacks can still steal the private data. These attacks add malicious backdoor code to the untrained, benign ML model and recover the private data through specific operations after training. However, existing attacks have various disadvantages, such as limited quality of the stolen private data, serious degradation of the original model performance, and being easy to defend against. To address these disadvantages, we propose a novel, efficient white-box backdoor attack method called Parameter Combination Encoding Attack (PCEA), which leverages linear combinations of parameters to remember the private data during training. We evaluate the proposed method in terms of stolen-image quality, testing accuracy, and sensitivity. The experimental results show that PCEA achieves much higher stolen-data quality and robustness while maintaining testing accuracy.
ISSN: 2471-285X
DOI: 10.1109/TETCI.2022.3182415
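
The abstract only names the core idea: remembering private training data through linear combinations of model parameters. As a rough, hedged illustration of that general idea rather than the paper's actual PCEA construction, the sketch below adds an auxiliary loss that pushes a fixed, attacker-chosen linear combination of the weights toward a private sample, so the sample can later be read back from the released parameters. The model, the combination matrix C, the loss weight, and the training loop are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch only: encode one flattened private image into a fixed
# linear combination of the model's parameters via an auxiliary loss term,
# so it can later be reconstructed from the released weights. This is NOT
# the paper's PCEA construction; all names and constants are assumptions.

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(784, 16), nn.ReLU(), nn.Linear(16, 10))

def flat_params(m):
    # Flatten all trainable parameters into one differentiable vector.
    return torch.cat([p.view(-1) for p in m.parameters()])

n_params = flat_params(model).numel()
img_dim = 784                       # one 28x28 grayscale image, flattened
secret = torch.rand(img_dim)        # stands in for a private training sample

# Fixed combination matrix known only to the attacker.
C = torch.randn(img_dim, n_params) / n_params ** 0.5

x = torch.rand(64, 784)             # stand-in training batch
y = torch.randint(0, 10, (64,))
task_loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    task_loss = task_loss_fn(model(x), y)
    # Auxiliary term: drive the chosen parameter combinations toward the secret.
    encode_loss = ((C @ flat_params(model)) - secret).pow(2).mean()
    (task_loss + 10.0 * encode_loss).backward()
    opt.step()

# After training, anyone holding C can approximately recover the sample
# from the published weights alone.
with torch.no_grad():
    recovered = C @ flat_params(model)
print("reconstruction MSE:", (recovered - secret).pow(2).mean().item())
```

In this toy setting the trade-off the abstract discusses is visible directly: the weight on the encoding term controls how faithfully the combination reproduces the private sample versus how much the original task loss (and hence testing accuracy) is disturbed.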