Privacy Preserving Machine Learning for Behavioral Authentication Systems
Format: Article
Language: English
Abstract: A behavioral authentication (BA) system uses the behavioral
characteristics of users to verify their identity claims. A BA verification
algorithm can be constructed by training a neural network (NN) classifier on
users' profiles. The trained NN model classifies the presented verification
data, and if the classification matches the claimed identity, the verification
algorithm accepts the claim. This classification-based approach removes the
need to maintain a profile database. However, like other NN architectures, the
NN classifier of a BA system is vulnerable to privacy attacks. Various
techniques are used to protect the privacy of the training and test data of an
NN. In this paper, we focus on a non-cryptographic approach and use random
projection (RP) to ensure data privacy in an NN model. RP is a
distance-preserving transformation based on a random matrix. Before sharing
their profiles with the verifier, users transform them by RP and keep their
matrices secret. To reduce the computational load of RP, we use sparse random
projection, which is well suited to low-compute devices. Along with the
correctness and security properties, our system ensures the changeability
property of the BA system. We also introduce an ML-based privacy attack and
show that our proposed system is robust against this and other privacy and
security attacks. We implemented our approach on three existing BA systems and
achieved a false rejection rate (FRR) below 2.0% and a false acceptance rate
(FAR) below 1.0%. Moreover, the ML-based privacy attacker can recover only
3.0% to 12.0% of the features from a portion of the projected profiles. These
recovered features are not sufficient to reveal details of a user's behavioral
pattern or to mount a subsequent attack. Our approach is general and can be
applied to other NN-based BA systems as well as to traditional biometric
systems.
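To illustrate the core mechanism the abstract describes, the sketch below builds a sparse random projection matrix (Achlioptas-style entries, a standard construction; the paper's exact parameters are not given in this record) and shows that distances between two profile vectors are approximately preserved after projection. All names, dimensions, and seeds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_random_projection(d, k, s=3, seed=0):
    """Build a d x k sparse random projection matrix.

    Achlioptas-style entries: +sqrt(s/k) with probability 1/(2s),
    -sqrt(s/k) with probability 1/(2s), and 0 otherwise, so roughly
    (1 - 1/s) of the entries are zero and projection is cheap on
    low-compute devices.
    """
    rng = np.random.default_rng(seed)
    return rng.choice(
        [np.sqrt(s / k), 0.0, -np.sqrt(s / k)],
        size=(d, k),
        p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)],
    )

# A user's behavioral profile has d features; it is projected to k
# dimensions. The matrix R stays secret on the user's device, and a
# fresh R can be drawn to re-enroll (the "changeability" property).
d, k = 200, 64
R = sparse_random_projection(d, k)

profile_a = np.random.default_rng(1).normal(size=d)
profile_b = np.random.default_rng(2).normal(size=d)

# Distances are approximately preserved (Johnson-Lindenstrauss style),
# so a verifier's NN can still classify the projected profiles without
# ever seeing the raw behavioral features.
orig_dist = np.linalg.norm(profile_a - profile_b)
proj_dist = np.linalg.norm(profile_a @ R - profile_b @ R)
print(f"original distance: {orig_dist:.3f}, projected: {proj_dist:.3f}")
```

The design point is that only the projected profile leaves the device; without the secret matrix R, inverting the projection to recover the raw features is hard, which is what the privacy-attack experiments in the paper probe.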
DOI: 10.48550/arxiv.2309.13046