Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment

Large Language Models (LLMs) have seen widespread adoption due to their remarkable natural language capabilities. However, when deploying them in real-world settings, it is important to align LLMs to generate texts according to acceptable human standards. Methods such as Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) have made significant progress in refining LLMs using human preference data. However, the privacy concerns inherent in utilizing such preference data have yet to be adequately studied. In this paper, we investigate the vulnerability of LLMs aligned using human preference datasets to membership inference attacks (MIAs), highlighting the shortcomings of previous MIA approaches with respect to preference data. Our study has two main contributions: first, we introduce a novel reference-based attack framework specifically for analyzing preference data, called PREMIA (Preference data MIA); second, we provide empirical evidence that DPO models are more vulnerable to MIA compared to PPO models. Our findings highlight gaps in current privacy-preserving practices for LLM alignment.
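
The record does not spell out PREMIA's exact scoring rule, but the reference-based idea described in the abstract can be sketched as follows. This is a hypothetical illustration only: the checkpoints ("gpt2" as a stand-in for both the base reference model and the DPO-tuned target), the prompt/response strings, and the likelihood-gap score are assumptions, not the authors' implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def response_nll(model, tokenizer, prompt, response):
    # Average negative log-likelihood of the response tokens given the prompt.
    enc = tokenizer(prompt + response, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
    labels = enc["input_ids"].clone()
    labels[:, :prompt_len] = -100  # mask prompt tokens so only the response is scored
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return out.loss.item()

def reference_based_score(target, reference, tokenizer, prompt, chosen):
    # Higher score -> the aligned (target) model assigns markedly higher likelihood
    # to this preferred response than the reference model does, suggesting the
    # (prompt, chosen) pair was part of the alignment preference data.
    return response_nll(reference, tokenizer, prompt, chosen) - \
           response_nll(target, tokenizer, prompt, chosen)

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")                # placeholder tokenizer
    reference = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model
    target = AutoModelForCausalLM.from_pretrained("gpt2")      # stand-in for a DPO-tuned model
    score = reference_based_score(target, reference, tok,
                                  "Question: How do I reset my password?\nAnswer: ",
                                  "You can reset it from the account settings page.")
    print("membership score:", score)  # compare against a threshold calibrated on non-member data

In practice the decision threshold would be calibrated on data known to lie outside the preference set; the abstract's finding is that such scores separate members from non-members more sharply for DPO-tuned models than for PPO-tuned ones.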

Bibliographic Details
Published in: arXiv.org 2024-07
Main Authors: Feng, Qizhang; Siva Rajesh Kasa; Hyokun Yun; Choon Hui Teo; Sravan Babu Bodapati
Format: Article
Language: English
Subjects: Alignment; Inference; Large language models; Optimization; Privacy
Online Access: Full text
EISSN: 2331-8422