Adversarial Robustness on Image Classification with $k$-means
In this paper we explore the challenges and strategies for enhancing the robustness of $k$-means clustering algorithms against adversarial manipulations. We evaluate the vulnerability of clustering algorithms to adversarial attacks, emphasising the associated security risks. Our study investigates the impact of incremental attack strength on training, introduces the concept of transferability between supervised and unsupervised models, and highlights the sensitivity of unsupervised models to sample distributions. We additionally introduce and evaluate an adversarial training method that improves testing performance in adversarial scenarios, and we highlight the importance of various parameters in the proposed training method, such as continuous learning, centroid initialisation, and adversarial step-count.
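The abstract describes attacks that push samples across $k$-means cluster boundaries. As a rough illustration of that setting only (not the authors' method; the `fit_kmeans` and `perturb_toward_rival` helpers below are hypothetical names for this sketch), a perturbation can move a point toward its nearest rival centroid until its cluster assignment flips:

```python
import numpy as np

def fit_kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Distance from every point to every centroid, then nearest assignment.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def perturb_toward_rival(x, centroids, eps):
    """Shift x by eps along the direction to its nearest *rival* centroid.
    A simple direction-based attack sketch; larger eps eventually flips
    the cluster assignment."""
    dists = np.linalg.norm(centroids - x, axis=1)
    own, rival = np.argsort(dists)[:2]
    direction = centroids[rival] - x
    return x + eps * direction / (np.linalg.norm(direction) + 1e-12)
```

An adversarial training defence in the spirit of the abstract would then refit the centroids on a mixture of clean and perturbed samples, with the perturbation budget (step count, step size) and the centroid initialisation as the tunable parameters the abstract highlights.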
Saved in:
Main authors: | Omari, Rollin; Kim, Junae; Montague, Paul |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing |
Online access: | Order full text |
creator | Omari, Rollin; Kim, Junae; Montague, Paul |
description | In this paper we explore the challenges and strategies for enhancing the
robustness of $k$-means clustering algorithms against adversarial
manipulations. We evaluate the vulnerability of clustering algorithms to
adversarial attacks, emphasising the associated security risks. Our study
investigates the impact of incremental attack strength on training, introduces
the concept of transferability between supervised and unsupervised models, and
highlights the sensitivity of unsupervised models to sample distributions. We
additionally introduce and evaluate an adversarial training method that
improves testing performance in adversarial scenarios, and we highlight the
importance of various parameters in the proposed training method, such as
continuous learning, centroid initialisation, and adversarial step-count. |
doi_str_mv | 10.48550/arxiv.2312.09533 |
format | Article |
creationdate | 2023-12-14 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
link | https://arxiv.org/abs/2312.09533 (View record in Cornell University) |
backlink | https://doi.org/10.48550/arXiv.2312.09533 (View paper in arXiv) |
backlink | https://doi.org/10.1109/ACCESS.2024.3365517 (View published paper; access to full text may be restricted) |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.09533 |
language | eng |
recordid | cdi_arxiv_primary_2312_09533 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition Computer Science - Cryptography and Security Computer Science - Learning Computer Science - Neural and Evolutionary Computing |
title | Adversarial Robustness on Image Classification with $k$-means |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T20%3A49%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adversarial%20Robustness%20on%20Image%20Classification%20with%20$k$-means&rft.au=Omari,%20Rollin&rft.date=2023-12-14&rft_id=info:doi/10.48550/arxiv.2312.09533&rft_dat=%3Carxiv_GOX%3E2312_09533%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |