Operationalizing Contextual Integrity in Privacy-Conscious Assistants

Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI compliant. Our evaluation is based on a novel form filling benchmark composed of human annotations of common webform applications, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.

Bibliographic Details

Main authors: Ghalebikesabi, Sahra; Bagdasaryan, Eugene; Yi, Ren; Yona, Itay; Shumailov, Ilia; Pappu, Aneesh; Shi, Chongyang; Weidinger, Laura; Stanforth, Robert; Berrada, Leonard; Kohli, Pushmeet; Huang, Po-Sen; Balle, Borja
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence
Online access: Request full text
Published: 2024-08-05 (arXiv:2408.02373, https://arxiv.org/abs/2408.02373)
License: CC BY 4.0 (open access)
DOI: 10.48550/arxiv.2408.02373
Source: arXiv.org