The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease...
Saved in:
Main Authors: | Assion, Felix; Schlicht, Peter; Greßner, Florens; Günther, Wiebke; Hüger, Fabian; Schmidt, Nico; Rasheed, Umair |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
creator | Assion, Felix; Schlicht, Peter; Greßner, Florens; Günther, Wiebke; Hüger, Fabian; Schmidt, Nico; Rasheed, Umair |
description | Most state-of-the-art machine learning (ML) classification systems are
vulnerable to adversarial perturbations. As a consequence, adversarial
robustness poses a significant challenge for the deployment of ML-based systems
in safety- and security-critical environments like autonomous driving, disease
detection or unmanned aerial vehicles. In recent years, we have seen an
impressive number of publications presenting ever more new adversarial
attacks. However, attack research seems rather unstructured, and new
attacks often appear to be random selections from the unlimited set of possible
adversarial attacks. With this publication, we present a structured analysis of
the adversarial attack creation process. By detecting different building blocks
of adversarial attacks, we outline the road to new sets of adversarial attacks.
We call this the "attack generator". In the pursuit of this objective, we
summarize and extend existing adversarial perturbation taxonomies. The
resulting taxonomy is then linked to the application context of computer vision
systems for autonomous vehicles, i.e. semantic segmentation and object
detection. Finally, in order to prove the usefulness of the attack generator,
we investigate existing semantic segmentation attacks with respect to the
detected defining components of adversarial attacks. |
doi_str_mv | 10.48550/arxiv.1906.07077 |
format | Article |
creationdate | 2019-06-17 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1906.07077 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_1906_07077 |
source | arXiv.org |
subjects | Computer Science - Cryptography and Security; Computer Science - Learning; Statistics - Machine Learning |
title | The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T13%3A33%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20Attack%20Generator:%20A%20Systematic%20Approach%20Towards%20Constructing%20Adversarial%20Attacks&rft.au=Assion,%20Felix&rft.date=2019-06-17&rft_id=info:doi/10.48550/arxiv.1906.07077&rft_dat=%3Carxiv_GOX%3E1906_07077%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |