Dealing with the unevenness: deeper insights in graph-based attack and defense

Graph Neural Networks (GNNs) have achieved state-of-the-art performance on various graph-related learning tasks. Because safety is critical in real-life applications, adversarial attacks and defenses on GNNs have attracted significant research attention. While adversarial attacks successfully degrade the performance of GNNs, the internal mechanisms and theoretical properties of graph-based attacks remain largely unexplored. In this paper, we develop deeper insights into graph structure attacks. First, by investigating the perturbations produced by representative attack methods such as Metattack, we reveal that the perturbations are unevenly distributed over the graph. Through empirical analysis, we show that such perturbations shift the distribution of the training set, breaking the i.i.d. assumption. Although these attacks successfully degrade GNN performance, they lack robustness: simply training the network on the validation set can severely degrade the attack's effectiveness. To overcome this drawback, we propose a novel k-fold training strategy, leading to the Black-Box Gradient Attack algorithm. Extensive experiments demonstrate that the proposed algorithm achieves stable attack performance without access to the training set. Finally, we present the first study of the theoretical properties of graph structure attacks, verifying the existence of trade-offs when conducting them.
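To make the k-fold idea concrete, here is a minimal sketch of how an attacker might train k surrogate models, each on a different fold of the labelled nodes, and average their adjacency gradients so that the attack does not depend on knowing the victim's exact train/validation split. This is not the authors' actual Black-Box Gradient Attack; the linearized two-layer GCN surrogate, the fold count, and all hyper-parameters below are illustrative assumptions.

```python
# Illustrative sketch of a k-fold surrogate-gradient structure attack.
# All architecture choices and hyper-parameters are assumptions, not the
# paper's exact BBGA algorithm.
import torch
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def surrogate_logits(A, X, W1, W2):
    # Linearized 2-layer GCN surrogate, Â(ÂXW1)W2, with activations
    # dropped, as is common in structure-attack surrogates.
    A_norm = normalize_adj(A)
    return A_norm @ (A_norm @ X @ W1) @ W2

def kfold_adjacency_gradient(A, X, y, labelled_idx, k=5, epochs=200, lr=0.01):
    """Average d(loss)/d(A) over k surrogates, each trained on one fold."""
    folds = torch.chunk(labelled_idx[torch.randperm(len(labelled_idx))], k)
    grad_sum = torch.zeros_like(A)
    for fold in folds:
        # Train a fresh surrogate on this fold only.
        W1 = torch.randn(X.size(1), 16, requires_grad=True)
        W2 = torch.randn(16, int(y.max()) + 1, requires_grad=True)
        opt = torch.optim.Adam([W1, W2], lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(surrogate_logits(A, X, W1, W2)[fold], y[fold])
            loss.backward()
            opt.step()
        # Gradient of the attack loss w.r.t. the adjacency matrix.
        A_var = A.clone().requires_grad_(True)
        loss = F.cross_entropy(surrogate_logits(A_var, X, W1, W2)[fold], y[fold])
        grad_sum += torch.autograd.grad(loss, A_var)[0]
    return grad_sum / k
```

An attacker would then flip the edge with the largest averaged gradient (respecting symmetry and a perturbation budget) and repeat. The point of averaging across folds is that the resulting perturbations are not tied to any one split of the labelled nodes, which is what makes the attack plausible in a black-box setting where the victim's training set is unknown.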

Bibliographic details

Published in: Machine learning, 2024-05, Vol. 113 (5), p. 2921-2953
Main authors: Zhan, Haoxi; Pei, Xiaobing
Format: Article
Language: English
DOI: 10.1007/s10994-022-06234-4
ISSN: 0885-6125
EISSN: 1573-0565
Publisher: New York: Springer US
Source: Springer Nature - Complete Springer Journals
Online access: Full text
Subjects:
Algorithms
Artificial Intelligence
Case studies
Cognitive tasks
Computer Science
Control
Deep learning
Defense
Fourier transforms
Graph neural networks
Graph representations
Machine Learning
Mechatronics
Methods
Natural Language Processing (NLP)
Neural networks
Performance degradation
Perturbation
Robotics
Scholarly publishing
Simulation and Modeling
Special Issue on Safe and Fair Machine Learning
Unevenness