Stealthy Energy Consumption-oriented Attacks on Training Stage in Deep Learning

Deep Learning as a Service (DLaaS) has developed rapidly in recent years, enabling applications such as self-driving, face recognition, and natural language processing for small enterprises. However, DLaaS can also introduce enormous computing power consumption at the service end. Existing works focus on optimizing the training process, for example by using low-cost chips or tuning training settings for better energy efficiency. In this paper, we revisit the issue from an adversarial perspective: an attacker attempts to make victims waste training effort without being noticed. In particular, we propose a novel attack that stealthily enlarges training costs by poisoning the training data. Using the Projected Gradient Descent (PGD) method to generate poisoned samples, we show that attackers can increase training costs by as much as 88% in both the white-box and the black-box scenario, with only a tiny influence on the model's accuracy.
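The abstract's core mechanism — perturbing training samples with Projected Gradient Descent so that training becomes more expensive while accuracy barely changes — can be sketched as below. This is a minimal illustration, not the paper's implementation: it uses a fixed logistic-regression surrogate model and the standard PGD formulation (gradient ascent on the surrogate loss, projected into an L-infinity ball around the clean sample); the names `pgd_poison`, `eps`, `alpha`, and `steps` are illustrative, and the paper's actual poisoning objective for inflating training cost may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y, w, b):
    # Gradient of the logistic loss w.r.t. the *input* x,
    # holding the surrogate model parameters (w, b) fixed.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def pgd_poison(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """PGD sketch: ascend the surrogate loss w.r.t. the sample x,
    then project back into an L-infinity ball of radius eps around
    the clean sample, keeping the perturbation inconspicuous."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad_x(x_adv, y, w, b)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

The eps-ball projection is what makes the attack stealthy in this sketch: the poisoned sample stays visually close to the clean one, yet its loss under the surrogate is driven up, which is one plausible way to slow a victim model's convergence.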

Bibliographic Details

Published in: Journal of signal processing systems, 2023-12, Vol. 95 (12), p. 1425-1437
Main authors: Chen, Wencheng; Li, Hongyu
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s11265-023-01895-3
Publisher: Springer US, New York
ORCID: https://orcid.org/0009-0008-7976-5431
ISSN: 1939-8018
EISSN: 1939-8115
Source: Springer Nature - Complete Springer Journals
Subjects: Circuits and Systems
Computer Imaging
Deep learning
Electrical Engineering
Energy consumption
Engineering
Face recognition
Image Processing and Computer Vision
Machine learning
Natural language processing
Pattern Recognition
Pattern Recognition and Graphics
Power consumption
Signal,Image and Speech Processing
Vision