Adversarial Attacks on Fairness of Graph Neural Networks

Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group (e.g., female) in graph-based applications. Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks. In this paper, we investigate the problem of adversarial attacks on fairness of GNNs and propose G-FairAttack, a general framework for attacking various types of fairness-aware GNNs in terms of fairness with an unnoticeable effect on prediction utility. In addition, we propose a fast computation technique to reduce the time complexity of G-FairAttack. The experimental study demonstrates that G-FairAttack successfully corrupts the fairness of different types of GNNs while keeping the attack unnoticeable. Our study on fairness attacks sheds light on potential vulnerabilities in fairness-aware GNNs and guides further research on the robustness of GNNs in terms of fairness.

Detailed description

Bibliographic details
Main authors: Zhang, Binchi; Dong, Yushun; Chen, Chen; Zhu, Yada; Luo, Minnan; Li, Jundong
Format: Article
Language: English
Subjects: Computer Science - Learning
Online access: https://arxiv.org/abs/2310.13822
creator Zhang, Binchi
Dong, Yushun
Chen, Chen
Zhu, Yada
Luo, Minnan
Li, Jundong
description Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group (e.g., female) in graph-based applications. Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks. In this paper, we investigate the problem of adversarial attacks on fairness of GNNs and propose G-FairAttack, a general framework for attacking various types of fairness-aware GNNs in terms of fairness with an unnoticeable effect on prediction utility. In addition, we propose a fast computation technique to reduce the time complexity of G-FairAttack. The experimental study demonstrates that G-FairAttack successfully corrupts the fairness of different types of GNNs while keeping the attack unnoticeable. Our study on fairness attacks sheds light on potential vulnerabilities in fairness-aware GNNs and guides further research on the robustness of GNNs in terms of fairness.
doi_str_mv 10.48550/arxiv.2310.13822
format Article
identifier DOI: 10.48550/arxiv.2310.13822
language eng
recordid cdi_arxiv_primary_2310_13822
source arXiv.org
subjects Computer Science - Learning
title Adversarial Attacks on Fairness of Graph Neural Networks
url https://arxiv.org/abs/2310.13822