Learning under p-tampering poisoning attacks

Recently, Mahloujifar and Mahmoody (Theory of Cryptography Conference ’17) studied attacks against learning algorithms using a special case of Valiant’s malicious noise, called p-tampering, in which the adversary gets to change any training example with independent probability p but is limited to choosing only ‘adversarial’ examples with correct labels. They obtained p-tampering attacks that increase the error probability in the so-called ‘targeted’ poisoning model, in which the adversary’s goal is to increase the loss of the trained hypothesis on a particular test example. At the heart of their attack was an efficient algorithm to bias the expected value of any bounded real-output function through p-tampering. In this work, we present new biasing attacks for increasing the expected value of bounded real-valued functions. Our improved biasing attacks directly imply improved p-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with considerably simpler analysis. We also study the possibility of PAC learning under p-tampering attacks in the non-targeted (aka indiscriminate) setting, where the adversary’s goal is to increase the risk of the generated hypothesis (for a random test example). We show that PAC learning is possible under p-tampering poisoning attacks essentially whenever it is possible in the realizable setting without the attacks. We further show that PAC learning under ‘no-mistake’ adversarial noise is not possible if the adversary could choose the (still limited to only a p fraction of) tampered examples that she substitutes with adversarially chosen ones. Our formal model for such ‘bounded-budget’ tampering attackers is inspired by the notions of adaptive corruption in cryptography.
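
As a rough illustration of the p-tampering noise model described in the abstract, the following Python sketch simulates the tampering channel: each training example is independently handed to the adversary with probability p, and any substitute example must still carry its correct label. The names used here (p_tamper, concept, adversary) and the toy parity example are illustrative assumptions, not code or notation from the paper.

import random

def p_tamper(dataset, p, concept, adversary, seed=0):
    """Simulate a p-tampering channel over a labeled training set.

    Each example (x, y) is handed to the adversary independently with
    probability p; the adversary may substitute any input it likes, but the
    substitute is labeled by the ground-truth concept, so labels stay correct.
    """
    rng = random.Random(seed)
    tampered = []
    for x, y in dataset:
        if rng.random() < p:
            x_adv = adversary(x)                      # adversarially chosen input
            tampered.append((x_adv, concept(x_adv)))  # correct label enforced
        else:
            tampered.append((x, y))                   # untouched with probability 1 - p
    return tampered

# Toy usage: the concept is the parity of a 3-bit input, and a naive adversary
# always proposes the all-ones input (which is then correctly labeled as 1).
if __name__ == "__main__":
    concept = lambda x: sum(x) % 2
    data = [((0, 1, 0), 1), ((1, 1, 0), 0), ((0, 0, 0), 0)]
    print(p_tamper(data, p=0.5, concept=concept, adversary=lambda x: (1, 1, 1)))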


Bibliographic Details
Published in: Annals of mathematics and artificial intelligence, 2020-07, Vol. 88 (7), p. 759-792
Main authors: Mahloujifar, Saeed; Diochnos, Dimitrios I.; Mahmoody, Mohammad
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 792
container_issue 7
container_start_page 759
container_title Annals of mathematics and artificial intelligence
container_volume 88
creator Mahloujifar, Saeed
Diochnos, Dimitrios I.
Mahmoody, Mohammad
doi_str_mv 10.1007/s10472-019-09675-1
format Article
fulltext fulltext
identifier ISSN: 1012-2443
ispartof Annals of mathematics and artificial intelligence, 2020-07, Vol.88 (7), p.759-792
issn 1012-2443
1573-7470
language eng
recordid cdi_proquest_journals_2918202915
source SpringerNature Journals; ProQuest Central UK/Ireland; ProQuest Central
subjects Algorithms
Analysis
Artificial Intelligence
Complex Systems
Computer Science
Cryptography
Data mining
Error correction
Expected values
Hate crimes
Hypotheses
Machine learning
Mathematical functions
Mathematics
Poisoning
title Learning under p-tampering poisoning attacks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T12%3A44%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20under%20p-tampering%20poisoning%20attacks&rft.jtitle=Annals%20of%20mathematics%20and%20artificial%20intelligence&rft.au=Mahloujifar,%20Saeed&rft.date=2020-07-01&rft.volume=88&rft.issue=7&rft.spage=759&rft.epage=792&rft.pages=759-792&rft.issn=1012-2443&rft.eissn=1573-7470&rft_id=info:doi/10.1007/s10472-019-09675-1&rft_dat=%3Cgale_proqu%3EA717494101%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2918202915&rft_id=info:pmid/&rft_galeid=A717494101&rfr_iscdi=true