Drop edges and adapt: A fairness enforcing fine-tuning for graph neural networks

The rise of graph representation learning as the primary solution for many different network science tasks led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase the segregation in social networks by disfavouring the links between individuals in specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy. We Drop the unfair Edges and, simultaneously, we Adapt the model’s parameters to those modifications, DEA in short. We introduce two covariance-based constraints designed explicitly for the link prediction task. We use these constraints to guide the optimization process responsible for learning the new ‘fair’ adjacency matrix. One novelty of DEA is that we can use a discrete yet learnable adjacency matrix in our fine-tuning. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction tasks. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can be used to improve link prediction performances during training. Finally, we compute the relevance of each component of our framework to show that the combination of both the constraints and the training of the adjacency matrix leads to optimal performances.
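The record names "covariance-based constraints" for fair link prediction but does not define them. A common formulation in the fairness literature penalizes the absolute covariance between a model's scores and a sensitive attribute; the sketch below illustrates that general idea only. All names are hypothetical, and this is not the paper's actual loss:

```python
import numpy as np

def covariance_constraint(scores, sensitive):
    """Absolute covariance between link scores and a per-edge sensitive
    indicator (e.g. 1 if both endpoints belong to the same demographic
    group, else 0). Driving this toward zero decorrelates the model's
    predictions from group membership."""
    s_centered = sensitive - sensitive.mean()
    y_centered = scores - scores.mean()
    return abs(np.mean(s_centered * y_centered))

# Toy check: a scorer that tracks group membership shows high covariance;
# a group-blind scorer shows near-zero covariance.
rng = np.random.default_rng(0)
sens = rng.integers(0, 2, size=1000).astype(float)
biased = sens + 0.01 * rng.standard_normal(1000)  # unfair scorer
fair = rng.standard_normal(1000)                  # group-blind scorer
```

In a training loop, a term like this would be added (weighted) to the link-prediction loss so that gradient descent trades predictive accuracy against group-dependence of the scores.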

Detailed Description

Saved in:
Bibliographic details
Published in: Neural Networks, 2023-10, Vol. 167, pp. 159-167
Main authors: Spinelli, Indro; Bianchini, Riccardo; Scardapane, Simone
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The rise of graph representation learning as the primary solution for many different network science tasks led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase the segregation in social networks by disfavouring the links between individuals in specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy. We Drop the unfair Edges and, simultaneously, we Adapt the model’s parameters to those modifications, DEA in short. We introduce two covariance-based constraints designed explicitly for the link prediction task. We use these constraints to guide the optimization process responsible for learning the new ‘fair’ adjacency matrix. One novelty of DEA is that we can use a discrete yet learnable adjacency matrix in our fine-tuning. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction tasks. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can be used to improve link prediction performances during training. Finally, we compute the relevance of each component of our framework to show that the combination of both the constraints and the training of the adjacency matrix leads to optimal performances.
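The abstract highlights a "discrete yet learnable adjacency matrix" but the record does not say how discreteness and learnability are reconciled. One standard approach is to parameterize per-edge keep-probabilities and sample a binary matrix, pairing the sample with a straight-through or Gumbel-style gradient estimator during training. The sketch below shows only the discrete sampling step; the function name and details are illustrative, not the paper's code:

```python
import numpy as np

def sample_discrete_adjacency(logits, rng):
    """Sample a binary, symmetric adjacency matrix from per-edge logits.
    A sigmoid maps each logit to a keep-probability; edges are then kept
    or dropped by Bernoulli sampling. In a differentiable pipeline the
    logits would receive gradients via a straight-through estimator."""
    probs = 1.0 / (1.0 + np.exp(-logits))            # sigmoid -> keep prob
    mask = (rng.random(logits.shape) < probs).astype(float)
    mask = np.triu(mask, 1)                          # strict upper triangle
    return mask + mask.T                             # symmetric, zero diagonal

rng = np.random.default_rng(42)
logits = rng.standard_normal((5, 5))                 # learnable parameters
adj = sample_discrete_adjacency(logits, rng)
```

Sampling from the upper triangle and mirroring it keeps the graph undirected and free of self-loops; dropping an edge corresponds to its keep-probability being pushed toward zero by the fairness-constrained fine-tuning objective.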
DOI: 10.1016/j.neunet.2023.08.002
Publisher: Elsevier Ltd
ISSN: 0893-6080
EISSN: 1879-2782
Source: ScienceDirect Journals (5 years ago - present)
Subjects: Fairness; Graph neural network; Link prediction
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T01%3A39%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Drop%20edges%20and%20adapt:%20A%20fairness%20enforcing%20fine-tuning%20for%20graph%20neural%20networks&rft.jtitle=Neural%20networks&rft.au=Spinelli,%20Indro&rft.date=2023-10&rft.volume=167&rft.spage=159&rft.epage=167&rft.pages=159-167&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2023.08.002&rft_dat=%3Cproquest_cross%3E2860407714%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2860407714&rft_id=info:pmid/&rft_els_id=S0893608023004215&rfr_iscdi=true