Locally Differentially Private Embedding Models in Distributed Fraud Prevention Systems
Saved in:

| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract:

Global financial crime activity is driving demand for machine learning solutions in fraud prevention. However, prevention systems are commonly provided to financial institutions in isolation, and few provisions exist for data sharing, owing to fears of unintentional leaks and adversarial attacks. Collaborative learning advances in finance are rare, and real-world insights derived from privacy-preserving data processing systems are hard to find. In this paper, we present a collaborative deep learning framework for fraud prevention, designed from a privacy standpoint and awarded at the recent PETs Prize Challenges. We leverage latent embedded representations of variable-length transaction sequences, together with local differential privacy, to construct a data release mechanism that can securely inform externally hosted fraud and anomaly detection models. We assess our contribution on two distributed data sets donated by large payment networks, and demonstrate robustness to popular inference-time attacks, along with utility-privacy trade-offs analogous to those published in alternative application domains.
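The abstract names the general technique (a local differential privacy release over sequence embeddings) but does not detail the mechanism. As a rough, illustrative sketch only, the following Python snippet shows one standard way such a release could work: each institution clips a fixed-size embedding to an L2 ball and adds Gaussian noise calibrated to (ε, δ) before sending it to an externally hosted detection model. The function names, parameters (`clip_norm`, `epsilon`, `delta`), and the Gaussian-mechanism calibration are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def clip_l2(v, clip_norm):
    """Project v onto the L2 ball of radius clip_norm."""
    norm = np.linalg.norm(v)
    return v * (clip_norm / norm) if norm > clip_norm else v

def ldp_release(embedding, epsilon=1.0, delta=1e-5, clip_norm=1.0, rng=None):
    """Release a noisy embedding under (epsilon, delta)-local DP.

    Illustrative sketch, not the paper's mechanism. In the local model
    any two inputs are 'neighbours', so the L2 sensitivity of the
    clipped embedding is the diameter of the clipping ball,
    2 * clip_norm; the noise scale follows the classical Gaussian
    mechanism calibration.
    """
    rng = rng or np.random.default_rng()
    clipped = clip_l2(np.asarray(embedding, dtype=float), clip_norm)
    sensitivity = 2.0 * clip_norm
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)

# Example: a participant perturbs a 64-dimensional sequence embedding
# locally before sharing it with an external anomaly-detection model.
local_embedding = np.random.default_rng(0).normal(size=64)
noisy_embedding = ldp_release(local_embedding, epsilon=2.0)
```

Under this kind of scheme, only the noisy vector ever leaves the institution, so the externally hosted model never observes raw transaction data; tightening `epsilon` increases the noise and trades utility for privacy, consistent with the utility-privacy trade-offs the abstract reports.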
DOI: 10.48550/arxiv.2401.02450