CORELOCKER: Neuron-level Usage Control
Saved in:

Main authors: | , , , , , , |
Format: | Conference paper |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | The growing complexity of deep neural network models in modern application domains necessitates a complex training process that involves extensive data, sophisticated design, and substantial computation. The trained model inherently encapsulates the intellectual property owned by the model developer (or the model owner). Consequently, safeguarding the model from unauthorized use by entities who obtain access to the model (or the model controllers), i.e., preserving the fundamental rights and proprietary interests of the model owner, has become a critical necessity. In this work, we propose CORELOCKER, employing the strategic extraction of a small subset of significant weights from the neural network. This subset serves as the access key to unlock the model's complete capability. The extraction of the key can be customized to varying levels of utility that the model owner intends to release. Authorized users with the access key have full access to the model, while unauthorized users can have access to only part of its capability. We establish a formal foundation to underpin CORELOCKER, which provides crucial lower and upper bounds for the utility disparity between pre- and post-protected networks. We evaluate CORELOCKER using representative datasets such as Fashion-MNIST, CIFAR-10, and CIFAR-100, as well as real-world models including VggNet, ResNet, and DenseNet. Our experimental results confirm its efficacy. We also demonstrate CORELOCKER's resilience against advanced model restoration attacks based on fine-tuning and pruning. |
ISSN: | 2375-1207 |
DOI: | 10.1109/SP54263.2024.00233 |
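The abstract describes extracting a small subset of significant weights that acts as an access key: the released model runs without them, and authorized users reinsert them to recover full capability. The following is a minimal numpy sketch of that general idea only; the function names, the magnitude-based selection criterion, and the per-layer treatment are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def extract_key(weights, k):
    """Remove the k largest-magnitude weights from a layer and return
    (locked_weights, key). Magnitude as the significance criterion is an
    assumption for illustration, not CORELOCKER's actual selection rule."""
    flat = weights.ravel()
    idx = np.argsort(np.abs(flat))[-k:]          # indices of the k most significant weights
    key = {"indices": idx, "values": flat[idx].copy()}
    locked = flat.copy()
    locked[idx] = 0.0                            # released model runs without these weights
    return locked.reshape(weights.shape), key

def unlock(locked_weights, key):
    """Restore full capability by reinserting the keyed weights."""
    flat = locked_weights.ravel().copy()
    flat[key["indices"]] = key["values"]
    return flat.reshape(locked_weights.shape)

# Toy usage: lock a 4x4 weight matrix, then restore it with the key.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
locked, key = extract_key(W, k=3)
restored = unlock(locked, key)
assert np.allclose(restored, W)      # key holders recover the original model
assert not np.allclose(locked, W)    # without the key, capability is degraded
```

The size of the key (`k`) plays the role of the utility knob the abstract mentions: the more significant weights are withheld, the larger the utility gap between the locked and unlocked model.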