Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Networks
| Field | Value |
|---|---|
| Main Authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
Abstract: We study protecting a user's data (images in this work) against a learner's unauthorized use in training neural networks. This is especially challenging when the user's data is only a tiny percentage of the learner's complete training set. We revisit traditional watermarking under modern deep learning settings to tackle this challenge. We show that when a user watermarks images using a specialized linear color transformation, a neural network classifier will be imprinted with the signature, so that a third-party arbitrator can verify the potentially unauthorized usage of the user data by inferring the watermark signature from the neural network. We also discuss which watermarking properties and signature spaces make the arbitrator's verification convincing. To the best of our knowledge, this work is the first to protect an individual user's data ownership from unauthorized use in training neural networks.
DOI: 10.48550/arxiv.2109.09023
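The abstract describes watermarking a user's images with a specialized linear color transformation before they are released, so that the signature later becomes recoverable from any classifier trained on them. This catalog record does not include the paper's exact construction, so the following is only a minimal sketch of the general idea, assuming RGB images with values in [0, 1] and a per-user signature given by a hypothetical 3x3 matrix `signature_matrix` and offset `signature_bias`; the paper's actual signature space and verification procedure are defined in the full text.

```python
import numpy as np


def watermark_image(image, key_matrix, key_bias):
    """Apply a user-specific linear color transformation as a watermark.

    image: H x W x 3 array with values in [0, 1].
    key_matrix: 3x3 matrix close to the identity (the user's signature).
    key_bias: length-3 offset with small magnitude.
    """
    flat = image.reshape(-1, 3)
    # Transform every pixel's RGB value with the same linear map.
    marked = flat @ key_matrix.T + key_bias
    return np.clip(marked, 0.0, 1.0).reshape(image.shape)


# Example: a signature drawn as a small random perturbation of the identity,
# so the watermark is visually subtle but consistent across the user's images.
rng = np.random.default_rng(seed=42)
signature_matrix = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
signature_bias = 0.02 * rng.standard_normal(3)

image = rng.random((32, 32, 3))  # stand-in for one of the user's photos
marked = watermark_image(image, signature_matrix, signature_bias)
```

In this reading, the user applies the same transformation to all images they release, and an arbitrator who knows (or can search over) the signature space checks whether a suspect classifier behaves as if it had been trained on images transformed by that particular signature.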