Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

Bibliographic Details
Main Authors: Wutschitz, Lukas; Köpf, Boris; Paverd, Andrew; Rajmohan, Saravan; Salem, Ahmed; Tople, Shruti; Zanella-Béguelin, Santiago; Xia, Menglin; Rühle, Victor
Format: Article
Language: English
Published: 2023-11-27
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Online Access: https://arxiv.org/abs/2311.15792
Description: Modern machine learning systems use models trained on ever-growing corpora. Typically, metadata such as ownership, access control, or licensing information is ignored during training. Instead, to mitigate privacy risks, we rely on generic techniques such as dataset sanitization and differentially private model training, with inherent privacy/utility trade-offs that hurt model performance. Moreover, these techniques have limitations in scenarios where sensitive information is shared across multiple participants and fine-grained access control is required. By ignoring metadata, we therefore miss an opportunity to better address security, privacy, and confidentiality challenges. In this paper, we take an information flow control perspective to describe machine learning systems, which allows us to leverage metadata such as access control policies and define clear-cut privacy and confidentiality guarantees with interpretable information flows. Under this perspective, we contrast two different approaches to achieve user-level non-interference: 1) fine-tuning per-user models, and 2) retrieval augmented models that access user-specific datasets at inference time. We compare these two approaches to a trivially non-interfering zero-shot baseline using a public model and to a baseline that fine-tunes this model on the whole corpus. We evaluate trained models on two datasets of scientific articles and demonstrate that retrieval augmented architectures deliver the best utility, scalability, and flexibility while satisfying strict non-interference guarantees.
DOI: 10.48550/arXiv.2311.15792
Source: arXiv.org
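
To make the retrieval-augmented approach contrasted in the description above concrete, here is a minimal Python sketch (not taken from the paper; the names Document, retrieve, and answer and the scoring rule are hypothetical, added for illustration). The idea: a fixed public model only ever receives context retrieved from documents the querying user is authorized to read, so its output for one user cannot depend on any other user's data, which is the user-level non-interference property the abstract describes.

# Minimal, hypothetical sketch of retrieval-augmented inference gated by
# per-user access-control metadata (not the paper's implementation). The
# fixed "public model" is represented by a generate() callable.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Document:
    owner: str            # contributing user
    readers: Set[str]     # access-control metadata: who may read this document
    text: str


def retrieve(corpus: List[Document], query: str, user: str, k: int = 3) -> List[Document]:
    """Return up to k documents the user may read, ranked by a toy word-overlap score."""
    allowed = [d for d in corpus if user in d.readers]   # enforce the policy first
    words = query.lower().split()
    allowed.sort(key=lambda d: -sum(w in d.text.lower() for w in words))
    return allowed[:k]


def answer(generate: Callable[[str], str], corpus: List[Document], query: str, user: str) -> str:
    """Retrieval-augmented inference: only the querying user's accessible context reaches the model."""
    context = "\n".join(d.text for d in retrieve(corpus, query, user))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)


if __name__ == "__main__":
    corpus = [
        Document("alice", {"alice"}, "Alice's private notes on information flow control."),
        Document("bob", {"bob"}, "Bob's confidential draft on retrieval augmentation."),
    ]
    echo = lambda prompt: prompt   # stand-in for a real public model's generation call
    # Alice's answer is assembled only from Alice-readable documents; Bob's data never flows in.
    print(answer(echo, corpus, "What is information flow control?", "alice"))

By contrast, the fine-tuning alternative mentioned in the description achieves the same guarantee by training a separate model (or adapter) per user on that user's data alone, trading the inference-time retrieval step for per-user training and storage.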