Data Poisoning Attacks in Internet-of-Vehicle Networks: Taxonomy, State-of-The-Art, and Future Directions

With the unprecedented development of deep learning, autonomous vehicles (AVs) have achieved tremendous progress nowadays. However, AV supported by DNN models is vulnerable to data poisoning attacks, hindering the large-scale application of autonomous driving. For example, by injecting carefully designed poisons into the training dataset of the DNN model in the traffic sign recognition system, the attacker can mislead the system to make targeted misclassification or cause a reduction in model classification accuracy.
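The dirty-label attack the abstract alludes to can be sketched in miniature. The following is a hypothetical illustration, not code from the paper: a label-flipping attack against a toy 1-nearest-neighbour classifier on two synthetic point clusters (standing in for two traffic-sign classes). All names (`make_data`, `flip_labels`, the cluster parameters) are invented for this sketch.

```python
import random

random.seed(1)

def make_data(n):
    # Two well-separated synthetic clusters standing in for two classes,
    # e.g. "stop sign" (label 0) vs "speed-limit sign" (label 1).
    pts = []
    for _ in range(n // 2):
        pts.append(((random.gauss(0, 1), random.gauss(0, 1)), 0))
        pts.append(((random.gauss(6, 1), random.gauss(6, 1)), 1))
    return pts

def knn_predict(train, p):
    # 1-nearest-neighbour: predict the label of the closest training point.
    px, py = p
    _, label = min(train, key=lambda t: (t[0][0] - px) ** 2 + (t[0][1] - py) ** 2)
    return label

def accuracy(train, test):
    return sum(knn_predict(train, p) == lab for p, lab in test) / len(test)

def flip_labels(train, rate):
    # Dirty-label poisoning: the attacker controls the labeling process for
    # a random `rate` fraction of training samples and inverts those labels.
    return [(p, 1 - lab) if random.random() < rate else (p, lab)
            for p, lab in train]

train, test = make_data(200), make_data(100)
clean = accuracy(train, test)
poisoned = accuracy(flip_labels(train, 0.4), test)
print(f"clean accuracy: {clean:.2f}, after 40% label flipping: {poisoned:.2f}")
```

With well-separated clusters the clean model is near-perfect, while flipping 40% of the training labels degrades test accuracy roughly in proportion to the flip rate — the "reduction in model classification accuracy" goal the abstract describes.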

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE transactions on industrial informatics 2023-01, Vol.19 (1), p.20-28
Main Authors: Chen, Yanjiao, Zhu, Xiaotian, Gong, Xueluan, Yi, Xinjing, Li, Shuyang
Format: Article
Language: eng
Subjects:
Online Access: Order full text
container_end_page 28
container_issue 1
container_start_page 20
container_title IEEE transactions on industrial informatics
container_volume 19
creator Chen, Yanjiao
Zhu, Xiaotian
Gong, Xueluan
Yi, Xinjing
Li, Shuyang
description With the unprecedented development of deep learning, autonomous vehicles (AVs) have achieved tremendous progress nowadays. However, AV supported by DNN models is vulnerable to data poisoning attacks, hindering the large-scale application of autonomous driving. For example, by injecting carefully designed poisons into the training dataset of the DNN model in the traffic sign recognition system, the attacker can mislead the system to make targeted misclassification or cause a reduction in model classification accuracy. In this article, we conduct a thorough investigation of the state-of-the-art data poisoning attacks and defenses against AVs. According to whether the attacker needs to manipulate the data labeling process, we divide the state-of-the-art attack approaches into two categories, i.e., dirty-label attacks and clean-label attacks. We also differentiate the existing defense methods into two categories based on whether to modify the training data or the models, i.e., data-based defenses and model-based defenses. In addition to a detailed review of attacks and defenses in each category, we also give a qualitative comparison of the existing attacks and defenses. Besides, we provide a quantitative comparison of the existing attack and defense methods through experiments. Last but not least, we pinpoint several future directions for data poisoning attacks and defenses in AVs, providing possible ways for further research.
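The data-based defenses the abstract categorizes can likewise be sketched with a hypothetical example (not from the paper): a simple sanitization pass that drops any training sample whose label disagrees with the majority label of its k nearest neighbours, a common heuristic against label-flipping poisons. All names and parameters here are invented for illustration.

```python
import random

random.seed(2)

def make_data(n):
    # Two well-separated synthetic clusters standing in for two classes.
    pts = []
    for _ in range(n // 2):
        pts.append(((random.gauss(0, 1), random.gauss(0, 1)), 0))
        pts.append(((random.gauss(6, 1), random.gauss(6, 1)), 1))
    return pts

def flip_labels(data, rate):
    # Dirty-label poisoning: invert the label of a random fraction of samples.
    return [(p, 1 - lab) if random.random() < rate else (p, lab)
            for p, lab in data]

def knn_labels(data, i, k):
    # Labels of the k nearest neighbours of sample i (excluding itself).
    (px, py), _ = data[i]
    order = sorted((j for j in range(len(data)) if j != i),
                   key=lambda j: (data[j][0][0] - px) ** 2
                               + (data[j][0][1] - py) ** 2)
    return [data[j][1] for j in order[:k]]

def sanitize(data, k=5):
    # Data-based defense: keep a sample only if its label matches the
    # majority vote of its k nearest neighbours; mislabeled (poisoned)
    # points in a well-separated dataset tend to disagree and get dropped.
    return [data[i] for i in range(len(data))
            if sum(l == data[i][1] for l in knn_labels(data, i, k)) > k // 2]

clean = make_data(200)
poisoned = flip_labels(clean, 0.3)
cleaned = sanitize(poisoned)
# Ground-truth proxy: cluster membership by x-coordinate (means 0 vs 6).
bad = lambda d: sum(lab != (0 if p[0] < 3 else 1) for p, lab in d)
print(f"mislabeled before: {bad(poisoned)}, after sanitization: {bad(cleaned)}")
```

The design intuition: because the poisoned points carry wrong labels but sit among correctly labeled neighbours, a local consistency check filters most of them before model training — the training data is modified while the model itself is untouched, which is what distinguishes the data-based from the model-based defense category in the article's taxonomy.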
doi_str_mv 10.1109/TII.2022.3198481
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1551-3203
ispartof IEEE transactions on industrial informatics, 2023-01, Vol.19 (1), p.20-28
issn 1551-3203
1941-0050
language eng
recordid cdi_crossref_primary_10_1109_TII_2022_3198481
source IEEE Electronic Library (IEL)
subjects Automobiles
Data models
Data poisoning attacks
Deep learning
deep neural networks
Feature extraction
Internet of Vehicles
Machine learning
Model accuracy
Object recognition
Poisons
Target recognition
Taxonomy
Toxicology
Traffic models
Traffic signs
Training
title Data Poisoning Attacks in Internet-of-Vehicle Networks: Taxonomy, State-of-The-Art, and Future Directions
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T22%3A13%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Data%20Poisoning%20Attacks%20in%20Internet-of-Vehicle%20Networks:%20Taxonomy,%20State-of-The-Art,%20and%20Future%20Directions&rft.jtitle=IEEE%20transactions%20on%20industrial%20informatics&rft.au=Chen,%20Yanjiao&rft.date=2023-01&rft.volume=19&rft.issue=1&rft.spage=20&rft.epage=28&rft.pages=20-28&rft.issn=1551-3203&rft.eissn=1941-0050&rft.coden=ITIICH&rft_id=info:doi/10.1109/TII.2022.3198481&rft_dat=%3Cproquest_RIE%3E2734387339%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2734387339&rft_id=info:pmid/&rft_ieee_id=9855872&rfr_iscdi=true