Transpose Attack: Stealing Datasets with Bidirectional Training
Saved in:

| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
Summary:

Deep neural networks are normally executed in the forward direction. However, in this work we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models. We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
DOI: 10.48550/arxiv.2311.07389
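
The core idea described in the summary is that the same weight matrices can serve two tasks: read in the forward direction they classify, and read in the transposed direction they reproduce memorized samples. Below is a minimal sketch of this bidirectional training idea in PyTorch; the layer sizes, the per-sample key scheme, the loss weighting, and all names (`BidirectionalMLP`, `transposed`, `train_step`) are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of bidirectional ("transpose") training, assuming PyTorch.
# Forward direction: image -> class logits (the legitimate task).
# Transposed direction: a per-sample key is pushed through the transposed
# weight matrices to reconstruct a memorized sample (the covert task).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=512, n_classes=10):
        super().__init__()
        # bias=False keeps the transposed pass a clean matrix transpose.
        self.fc1 = nn.Linear(in_dim, hidden, bias=False)
        self.fc2 = nn.Linear(hidden, n_classes, bias=False)

    def forward(self, x):
        # Legitimate direction: x @ W1^T -> ReLU -> @ W2^T.
        return self.fc2(F.relu(self.fc1(x)))

    def transposed(self, key):
        # Covert direction reuses the SAME weights, transposed:
        # key (batch, n_classes) -> hidden -> reconstruction (batch, in_dim).
        h = F.relu(key @ self.fc2.weight)   # (batch, hidden)
        return h @ self.fc1.weight          # (batch, in_dim)

def train_step(model, opt, x, y, keys, lam=1.0):
    """One joint step: classification loss plus reconstruction loss."""
    task_loss = F.cross_entropy(model(x), y)          # forward task
    mem_loss = F.mse_loss(model.transposed(keys), x)  # memorize the batch
    loss = task_loss + lam * mem_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: memorize a fixed batch while learning to classify it.
model = BidirectionalMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                 # samples to classify AND exfiltrate
y = torch.randint(0, 10, (32,))
keys = torch.randn(32, 10)              # fixed secret keys, one per sample
for _ in range(200):
    train_step(model, opt, x, y, keys)
recovered = model.transposed(keys)      # replay keys to recover the samples
```

In an attack of the kind the summary describes, an adversary would train such a model inside a protected environment, export it as an ordinary classifier, and then recover the memorized samples outside by replaying the keys through the transposed direction.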