Adversarial learning for counterfactual fairness
In recent years, fairness has become an important topic in the machine learning research community. In particular, counterfactual fairness aims at building prediction models which ensure fairness at the most individual level. Rather than globally considering equity over the entire population, the idea is to imagine what any individual would look like with a variation of a given attribute of interest, such as a different gender or race. Existing approaches rely on variational auto-encoding of individuals, using Maximum Mean Discrepancy (MMD) penalization to limit the statistical dependence of inferred representations on their corresponding sensitive attributes. This enables the simulation of counterfactual samples used for training the target fair model, the goal being to produce similar outcomes for every alternate version of any individual. In this work, we propose to rely on an adversarial neural learning approach that enables more powerful inference than MMD penalties and is better suited to the continuous setting, where values of sensitive attributes cannot be exhaustively enumerated. Experiments show significant improvements in terms of counterfactual fairness for both the discrete and the continuous settings.
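The MMD penalization mentioned in the abstract can be illustrated with a small sketch: a biased estimate of the squared Maximum Mean Discrepancy under a Gaussian kernel, computed between latent representations of two sensitive groups. This is an illustrative sketch only, not the paper's implementation; the bandwidth heuristic, sample sizes, and group setup are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = (np.sum(x ** 2, axis=1)[:, None]
          + np.sum(y ** 2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=None):
    # Biased estimate of squared Maximum Mean Discrepancy between samples.
    if sigma is None:
        sigma = np.sqrt(x.shape[1])  # crude bandwidth heuristic (assumption)
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
z_same_a = rng.normal(0.0, 1.0, size=(200, 5))   # latents, sensitive group A
z_same_b = rng.normal(0.0, 1.0, size=(200, 5))   # group B, same distribution
z_shifted = rng.normal(2.0, 1.0, size=(200, 5))  # group C, shifted distribution

# A low MMD means the latent representation carries little group information;
# a high MMD means the representation leaks the sensitive attribute.
print(mmd2(z_same_a, z_same_b))   # small
print(mmd2(z_same_a, z_shifted))  # much larger
```

Used as a training penalty, this term pushes the encoder to make the latent distributions of the sensitive groups indistinguishable; the paper argues an adversarial discriminator can enforce the same independence more powerfully, especially when the sensitive attribute is continuous and its values cannot be enumerated as groups.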
Published in: Machine learning, 2023-03, Vol.112 (3), p.741-763
Main authors: Grari, Vincent; Lamprier, Sylvain; Detyniecki, Marcin
Format: Article
Language: English
Subjects: Artificial Intelligence; Computer Science; Control; Machine Learning; Mechatronics; Natural Language Processing (NLP); Prediction models; Robotics; Simulation and Modeling; Special Issue on Safe and Fair Machine Learning
ISSN: 0885-6125; EISSN: 1573-0565
DOI: 10.1007/s10994-022-06206-8
Online access: Full text