Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack
Model extraction increasingly attracts research attention, as keeping commercial AI models private can retain a competitive advantage. In some scenarios, AI models are trained proprietarily, and neither pre-trained models nor sufficient in-distribution data are publicly available. Model extraction...
Saved in:
Main authors: | Zhang, Xinyi; Fang, Chengfang; Shi, Jie |
---|---|
Format: | Article |
Language: | eng |
Keywords: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Learning |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Zhang, Xinyi; Fang, Chengfang; Shi, Jie |
description | Model extraction increasingly attracts research attention, as keeping
commercial AI models private can retain a competitive advantage. In some
scenarios, AI models are trained proprietarily, and neither pre-trained models
nor sufficient in-distribution data are publicly available. Model extraction
attacks against such models are typically more devastating. In this paper, we
therefore empirically investigate the behavior of model extraction under these
scenarios. We find that the effectiveness of existing techniques is
significantly affected by the absence of pre-trained models. In addition, the
impacts of the attacker's hyperparameters, e.g., model architecture and
optimizer, as well as the utility of the information retrieved from queries,
are counterintuitive. We provide insights into the possible causes of these
phenomena. Based on these observations, we formulate model extraction attacks
as an adaptive framework that captures these factors with deep reinforcement
learning. Experiments show that the proposed framework can improve existing
techniques and that model extraction is still possible in such strict
scenarios. Our research can help system designers construct better defense
strategies for their scenarios. |
doi_str_mv | 10.48550/arxiv.2104.05921 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2104.05921 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2104_05921 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Learning |
title | Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T19%3A36%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Thief,%20Beware%20of%20What%20Get%20You%20There:%20Towards%20Understanding%20Model%20Extraction%20Attack&rft.au=Zhang,%20Xinyi&rft.date=2021-04-12&rft_id=info:doi/10.48550/arxiv.2104.05921&rft_dat=%3Carxiv_GOX%3E2104_05921%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
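
The abstract above describes query-based model extraction in a strict setting where the attacker has neither a pre-trained model nor in-distribution data. The sketch below is a minimal, hypothetical illustration of only the basic attack loop: label attacker-generated inputs by querying a black-box victim, then train a substitute on the returned predictions. It uses scikit-learn MLPs as stand-ins for both victim and substitute, and random-noise queries as the attacker's data; it is not the paper's reinforcement-learning framework, and names such as `query_victim` are illustrative assumptions.

```python
# Minimal, self-contained sketch (illustrative only; not the paper's method) of a
# query-based model extraction loop: the attacker labels its own inputs by
# querying a black-box victim and trains a substitute on the responses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# --- Victim side (hidden from the attacker in a real attack) -----------------
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, test_size=0.25,
                                               random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_train, y_train)

def query_victim(inputs):
    """Black-box prediction API: the attacker sees only output probabilities."""
    return victim.predict_proba(inputs)

# --- Attacker side ------------------------------------------------------------
# With no in-distribution data, the attacker falls back on synthetic queries
# (here: plain uniform noise), which corresponds to the strict scenario the
# paper studies.
queries = rng.uniform(-4.0, 4.0, size=(5000, X.shape[1]))
soft_labels = query_victim(queries)          # information retrieved from queries
hard_labels = soft_labels.argmax(axis=1)

substitute = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                           random_state=1).fit(queries, hard_labels)

# Agreement with the victim on held-out in-distribution data is a common
# fidelity measure for the extracted model.
agreement = (substitute.predict(X_test) == victim.predict(X_test)).mean()
print(f"substitute/victim agreement: {agreement:.2%}")
```

Read against the abstract, the paper's adaptive framework can be understood as replacing the fixed choices in this loop (query distribution, substitute architecture, optimizer, and how the query responses are used) with decisions made adaptively by a deep reinforcement learning agent.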