Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense
Availability poisons exploit supervised learning (SL) algorithms by introducing class-related shortcut features in images such that models trained on poisoned data are useless for real-world datasets. Self-supervised learning (SSL), which utilizes augmentations to learn instance discrimination, is regarded as a strong defense against poisoned data. However, by extending the study of SSL across multiple poisons on the CIFAR-10 and ImageNet-100 datasets, we demonstrate that it often performs poorly, far below the performance of training on clean data. Leveraging the vulnerability of SL to poison attacks, we introduce adversarial training (AT) on SL to obfuscate poison features and guide robust feature learning for SSL. Our proposed defense, designated VESPR (Vulnerability Exploitation of Supervised Poisoning for Robust SSL), surpasses the performance of six previous defenses across seven popular availability poisons, boosting the minimum and average ImageNet-100 test accuracies of poisoned models by 16% and 9%, respectively. Through analysis and ablation studies, we elucidate the mechanisms by which VESPR learns robust class features.
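The core mechanism the abstract describes, crafting adversarial perturbations against a supervised classifier so that poison shortcut features are drowned out before contrastive SSL training, can be sketched roughly as below. This is a minimal illustration assuming a PyTorch setup; the `pgd_perturb`, `info_nce`, and `vespr_step` helpers, the PGD hyperparameters, and the simplified one-directional InfoNCE loss are all illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(classifier, images, labels, eps=8/255, alpha=2/255, steps=5):
    """Craft an L-inf PGD perturbation that maximizes the supervised loss.
    Because availability poisons act as easy shortcut features for SL, such
    adversarial noise tends to overwrite them (assumes pixel range [0, 1])."""
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (images + delta).clamp(0.0, 1.0).detach()

def info_nce(z1, z2, tau=0.5):
    """Simplified one-directional InfoNCE over two views of the same batch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # pairwise view similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # matching index = positive pair

def vespr_step(encoder, proj_head, classifier, augment, images, labels, opt):
    """One illustrative training step: perturb the (possibly poisoned) batch
    against the supervised classifier, then run contrastive SSL on it."""
    adv = pgd_perturb(classifier, images, labels)
    z1 = proj_head(encoder(augment(adv)))    # two independent augmentations
    z2 = proj_head(encoder(augment(adv)))
    loss = info_nce(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Conceptually, the supervised head supplies the gradient signal that disrupts the class-related shortcut features, while the SSL objective then learns instance-discriminative representations from the perturbed images.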
Saved in:
Published in: | arXiv.org, 2024-09 |
---|---|
Main authors: | Styborski, Jeremy; Lyu, Mingzhi; Huang, Yi; Adams, Kong |
Format: | Article |
Language: | eng |
Subjects: | Ablation; Algorithms; Availability; Datasets; Machine learning; Poisoning; Poisons; Robustness; Self-supervised learning |
Online access: | Full text |
Publisher: | Ithaca: Cornell University Library, arXiv.org |
Date: | 2024-09-13 |
Identifier: | EISSN: 2331-8422 |
Rights: | 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
Source: | Free E-Journals |