DAWN: Dynamic Adversarial Watermarking of Neural Networks
Format: Article
Language: English
Online access: Order full text
Abstract: Training machine learning (ML) models is expensive in terms of computational
power, amounts of labeled data and human expertise. Thus, ML models constitute
intellectual property (IP) and business value for their owners. Embedding
digital watermarks during model training allows a model owner to later identify
their models in case of theft or misuse. However, model functionality can also
be stolen via model extraction, where an adversary trains a surrogate model
using results returned from a prediction API of the original model. Recent work
has shown that model extraction is a realistic threat. Existing watermarking
schemes are ineffective against IP theft via model extraction since it is the
adversary who trains the surrogate model. In this paper, we introduce DAWN
(Dynamic Adversarial Watermarking of Neural Networks), the first approach to
use watermarking to deter model extraction IP theft. Unlike prior watermarking
schemes, DAWN does not impose changes to the training process but it operates
at the prediction API of the protected model, by dynamically changing the
responses for a small subset of queries (e.g., <0.5%) from API clients. This set is a
watermark that will be embedded in case a client uses its queries to train a surrogate
model. We show that DAWN is resilient against two state-of-the-art model extraction
attacks, effectively watermarking all extracted surrogate models, allowing model owners
to reliably demonstrate ownership (with confidence $>1-2^{-64}$), incurring negligible
loss of prediction accuracy (0.03-0.5%).
DOI: 10.48550/arxiv.1906.00830
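
The abstract describes DAWN's mechanism only at a high level: the prediction API deterministically alters the responses to a small fraction of queries, and those altered (input, response) pairs later serve as a watermark in any surrogate model trained on them. The sketch below illustrates that idea for a generic classifier; the class name DawnWrapper, the watermark_rate parameter, and the keyed-hash selection and label-derivation rules are illustrative assumptions, not the paper's concrete construction.

```python
import hashlib
import hmac

import numpy as np


class DawnWrapper:
    """Illustrative DAWN-style front-end for a prediction API.

    A keyed hash of each query decides, deterministically, whether the
    response is altered. The altered (input, label) pairs form a
    watermark that a surrogate model trained on API responses absorbs.
    """

    def __init__(self, model, secret_key: bytes, num_classes: int,
                 watermark_rate: float = 0.005):
        self.model = model            # callable: np.ndarray -> int class id
        self.key = secret_key
        self.num_classes = num_classes
        self.rate = watermark_rate    # ~0.5% of queries, per the abstract

    def _digest(self, x_bytes: bytes) -> int:
        return int.from_bytes(
            hmac.new(self.key, x_bytes, hashlib.sha256).digest(), "big")

    def predict(self, x: np.ndarray) -> int:
        d = self._digest(x.tobytes())
        # Input-keyed selection: the same query is always answered the
        # same way, so repeated queries cannot expose the watermark.
        if (d % 1_000_000) / 1_000_000 < self.rate:
            true_label = int(self.model(x))
            # Derive an incorrect label deterministically from the hash;
            # the offset spans 1..num_classes-1, so the returned label
            # never equals the true one.
            offset = 1 + (d >> 16) % (self.num_classes - 1)
            return (true_label + offset) % self.num_classes
        return int(self.model(x))
```

Deterministic, input-keyed selection matters here: an adversary who repeats a query always receives the same (possibly altered) answer, so the watermark cannot be filtered out by majority voting over repeated queries.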