Improving Machine Learning Robustness via Adversarial Training

As Machine Learning (ML) is increasingly used to solve tasks in real-world applications, it is crucial to ensure at design time that ML algorithms are robust to worst-case noise, adversarial attacks, and highly unusual inputs. Studying ML robustness will significantly help in the design of ML algorithms. In this paper, we investigate ML robustness using adversarial training in centralized and decentralized environments, where ML training and testing are conducted on a single computer or across multiple computers. In the centralized environment, we achieve test accuracies of 65.41% and 83.0% when classifying adversarial examples generated by the Fast Gradient Sign Method (FGSM) and DeepFool, respectively. Compared with existing studies, these results represent improvements of 18.41% for FGSM and 47% for DeepFool. In the decentralized environment, we study the robustness of federated learning (FL) by applying adversarial training to independent and identically distributed (IID) and non-IID data, using CIFAR-10. In the IID case, our experimental results show a robust accuracy comparable to the one obtained in the centralized environment. In the non-IID case, however, the natural accuracy drops from 66.23% to 57.82%, and the robust accuracy decreases by 25% and 23.4% under C&W and Projected Gradient Descent (PGD) attacks, respectively, relative to the IID case. We further propose an IID data-sharing approach, which increases the natural accuracy to 85.04% and the robust accuracy from 57% to 72% under C&W attacks and from 59% to 67% under PGD attacks.
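
The record itself carries no code, so the following is a minimal PyTorch sketch of the standard one-step FGSM perturbation the abstract refers to; the model, the assumed [0, 1] input range, and the epsilon value are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: move each input by epsilon in the direction of the
    sign of the loss gradient (assumes inputs are scaled to [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Differentiate w.r.t. the input only, leaving model gradients untouched.
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```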
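
Since robustness is evaluated under PGD, a sketch of the widely used PGD-based adversarial training loop (in the style of Madry et al.) follows; the step size, step count, radius, and optimizer handling are placeholder choices, not the configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Iterated gradient-sign steps projected back onto the L-infinity
    ball of radius epsilon around x, starting from a random point."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project onto the epsilon-ball around x, then the image range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of adversarial training: fit the model on PGD examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```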

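The abstract also contrasts IID with non-IID federated training on CIFAR-10 and proposes sharing IID data to close the gap. The record does not spell out the partitioning or sharing scheme, so the sketch below shows one common label-skew split and a small globally shared IID pool, purely as an illustration of the setup being described; `shards_per_client` and `share_fraction` are hypothetical parameters.

```python
import numpy as np

def split_non_iid(labels, num_clients=10, shards_per_client=2, seed=0):
    """Label-skew partition: sort sample indices by label, cut them into
    shards, and give each client a few shards so it sees few classes."""
    rng = np.random.default_rng(seed)
    shards = np.array_split(np.argsort(labels), num_clients * shards_per_client)
    shard_ids = rng.permutation(len(shards)).reshape(num_clients, shards_per_client)
    return [np.concatenate([shards[s] for s in row]) for row in shard_ids]

def add_shared_iid_pool(client_indices, num_samples, share_fraction=0.05, seed=0):
    """Append a small uniformly drawn (hence IID) subset of the global
    training set to every client's local data."""
    rng = np.random.default_rng(seed)
    pool = rng.choice(num_samples, int(share_fraction * num_samples), replace=False)
    return [np.concatenate([idx, pool]) for idx in client_indices]
```

With `shards_per_client=2`, each CIFAR-10 client holds at most two classes, reproducing the kind of label skew under which the abstract reports its accuracy drop; the shared pool then restores some class coverage at every client.
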
Bibliographic Details

Main authors: Dang, Long; Hapuarachchi, Thushari; Xiong, Kaiqi; Lin, Jing
Format: Article
Language: English
Published: 2023-09-21 (arXiv preprint)
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Learning
DOI: 10.48550/arxiv.2309.12593
Online access: full text available at https://arxiv.org/abs/2309.12593