Locally Differentially Private Embedding Models in Distributed Fraud Prevention Systems

Global financial crime activity is driving demand for machine learning solutions in fraud prevention. However, prevention systems are commonly serviced to financial institutions in isolation, and few provisions exist for data sharing due to fears of unintentional leaks and adversarial attacks. Collaborative learning advances in finance are rare, and it is hard to find real-world insights derived from privacy-preserving data processing systems. In this paper, we present a collaborative deep learning framework for fraud prevention, designed from a privacy standpoint, and awarded at the recent PETs Prize Challenges. We leverage latent embedded representations of varied-length transaction sequences, along with local differential privacy, in order to construct a data release mechanism which can securely inform externally hosted fraud and anomaly detection models. We assess our contribution on two distributed data sets donated by large payment networks, and demonstrate robustness to popular inference-time attacks, along with utility-privacy trade-offs analogous to published work in alternative application domains.
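A minimal sketch of the kind of locally differentially private release mechanism the abstract describes, assuming a client-side encoder (not shown) has already mapped a variable-length transaction sequence to a fixed-size embedding; the clipping bound, noise calibration, and function names below are illustrative assumptions, not the authors' implementation.

import numpy as np

def release_embedding(embedding, epsilon, clip_norm=1.0, rng=None):
    # Clip the embedding to a fixed L2 norm so the release has bounded
    # sensitivity, then add Laplace noise calibrated to that bound.
    rng = np.random.default_rng() if rng is None else rng
    emb = np.asarray(embedding, dtype=float)
    norm = np.linalg.norm(emb)
    if norm > clip_norm:
        emb = emb * (clip_norm / norm)
    # Any two clipped embeddings differ by at most 2 * clip_norm * sqrt(d)
    # in L1 norm, so this Laplace scale yields an epsilon-LDP release.
    d = emb.shape[0]
    sensitivity = 2.0 * clip_norm * np.sqrt(d)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=d)
    return emb + noise

# Usage: an encoder (assumed, hypothetical) produces a 32-dimensional
# embedding of a transaction sequence; only the noisy vector would be
# shared with an externally hosted fraud / anomaly detection model.
private_vector = release_embedding(np.random.randn(32), epsilon=2.0)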

Bibliographic details

Main authors: Perez, Iker; Wong, Jason; Skalski, Piotr; Burrell, Stuart; Mortier, Richard; McAuley, Derek; Sutton, David
Format: Article
Language: English
Published: 2024-01-03
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
DOI: 10.48550/arxiv.2401.02450
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-sa/4.0
Online access: https://arxiv.org/abs/2401.02450