Open DNN Box by Power Side-Channel Attack

Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box: white-box attacks assume full knowledge of the model, while black-box attacks assume none. In general, revealing more internal information enables much more powerful and efficient attacks. However, in most real-world applications the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work we propose a technique based on side-channel information to reveal the internal information of black-box models. Specifically, we make the following contributions: (1) we are the first to use side-channel information to reveal the internal network architecture of embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method achieves 96.50% accuracy on average. Such results suggest that close attention should be paid to the security of many AI applications, and that corresponding defensive strategies should be proposed in the future.
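The attack described in the abstract profiles the power consumption of an embedded device while a network runs, then matches unseen traces against known architectures. The sketch below is only an illustration of that general idea, not the paper's actual method: the architecture names, simulated trace shapes, noise model, and the nearest-centroid classifier are all assumptions made for the example.

```python
# Toy illustration: distinguishing two hypothetical DNN architectures
# from simulated power traces with a nearest-centroid classifier.
import random

random.seed(0)
TRACE_LEN = 64  # samples per simulated power trace

def simulate_trace(arch):
    """Return a simulated power trace: each (hypothetical) architecture
    has a distinct characteristic power profile plus Gaussian noise."""
    if arch == "convnet":  # assumed: conv layers draw more power early on
        base = [1.0 if t < TRACE_LEN // 2 else 0.2 for t in range(TRACE_LEN)]
    else:                  # assumed: "mlp" has a flat, lower power draw
        base = [0.5] * TRACE_LEN
    return [b + random.gauss(0.0, 0.1) for b in base]

def centroid(traces):
    """Mean trace across a list of traces (per-sample average)."""
    return [sum(col) / len(col) for col in zip(*traces)]

def classify(trace, centroids):
    """Pick the architecture whose centroid is nearest in squared
    Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(trace, c))
    return min(centroids, key=lambda arch: dist(centroids[arch]))

# "Profiling" phase: collect labelled traces from devices we control.
train = {arch: [simulate_trace(arch) for _ in range(20)]
         for arch in ("convnet", "mlp")}
centroids = {arch: centroid(ts) for arch, ts in train.items()}

# "Attack" phase: identify the architecture behind unseen traces.
test_set = [(arch, simulate_trace(arch))
            for arch in ("convnet", "mlp") for _ in range(50)]
correct = sum(classify(tr, centroids) == arch for arch, tr in test_set)
accuracy = correct / len(test_set)
print(f"architecture identification accuracy: {accuracy:.2%}")
```

In this toy setting the two power profiles are well separated relative to the noise, so the classifier identifies the architecture reliably; the paper's reported 96.50% average accuracy concerns real devices and real networks, where the separation is far less clean.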

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Xiang, Yun, Chen, Zhuangzhi, Chen, Zuohui, Fang, Zebin, Hao, Haiyang, Chen, Jinyin, Liu, Yi, Wu, Zhefu, Xuan, Qi, Yang, Xiaoniu
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Xiang, Yun
Chen, Zhuangzhi
Chen, Zuohui
Fang, Zebin
Hao, Haiyang
Chen, Jinyin
Liu, Yi
Wu, Zhefu
Xuan, Qi
Yang, Xiaoniu
description Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box: white-box attacks assume full knowledge of the model, while black-box attacks assume none. In general, revealing more internal information enables much more powerful and efficient attacks. However, in most real-world applications the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work we propose a technique based on side-channel information to reveal the internal information of black-box models. Specifically, we make the following contributions: (1) we are the first to use side-channel information to reveal the internal network architecture of embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method achieves 96.50% accuracy on average. Such results suggest that close attention should be paid to the security of many AI applications, and that corresponding defensive strategies should be proposed in the future.
doi_str_mv 10.48550/arxiv.1907.10406
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1907.10406
language eng
recordid cdi_arxiv_primary_1907_10406
source arXiv.org
subjects Computer Science - Cryptography and Security
Computer Science - Learning
title Open DNN Box by Power Side-Channel Attack
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T17%3A33%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Open%20DNN%20Box%20by%20Power%20Side-Channel%20Attack&rft.au=Xiang,%20Yun&rft.date=2019-07-21&rft_id=info:doi/10.48550/arxiv.1907.10406&rft_dat=%3Carxiv_GOX%3E1907_10406%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true