Lookahead Counterfactual Fairness
As machine learning (ML) algorithms are used in applications that involve humans, concerns have arisen that these algorithms may be biased against certain social groups. \textit{Counterfactual fairness} (CF) is a fairness notion proposed in Kusner et al. (2017) that measures the unfairness of ML predictions; it requires that the prediction perceived by an individual in the real world have the same marginal distribution as it would in a counterfactual world in which the individual belongs to a different group. Although CF ensures fair ML predictions, it fails to consider the downstream effects of those predictions on individuals. Since humans are strategic and often adapt their behavior in response to the ML system, predictions that satisfy CF may not lead to a fair future outcome for the individuals. In this paper, we introduce \textit{lookahead counterfactual fairness} (LCF), a fairness notion that accounts for the downstream effects of ML models by requiring the individual's \textit{future status} to be counterfactually fair. We theoretically identify conditions under which LCF can be satisfied and propose an algorithm based on these theorems. We also extend the concept to path-dependent fairness. Experiments on both synthetic and real data validate the proposed method.
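To make the two notions concrete: in Kusner et al. (2017), a predictor \(\hat{Y}\) defined over a structural causal model with latent background variables \(U\) is counterfactually fair if, for every context \(X = x, A = a\), every outcome \(y\), and every attainable counterfactual value \(a'\),

\[
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big).
\]

Following the abstract's description, LCF imposes the analogous condition one step downstream, on the individual's future status (written here as \(Y^{\ast}\), the status reached after the individual reacts to the prediction; the paper's exact formalization may differ):

\[
P\big(Y^{\ast}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  = P\big(Y^{\ast}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big).
\]

The gap between the two conditions can be seen in a toy simulation. The sketch below is illustrative only and is not the paper's model: the response function `react` and all parameter values are hypothetical. A predictor whose output is identical in the factual and counterfactual worlds (so CF holds) can still produce different future-status distributions once the two worlds convert the same prediction into improvement at different rates.

```python
# Toy illustration (hypothetical model, not taken from the paper): a
# counterfactually fair prediction can still yield an unfair future status
# once group-dependent strategic responses are taken into account.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The latent background variable U is shared between the factual world
# (A = 0) and the counterfactual world (A = 1), as in a structural causal model.
u = rng.normal(size=n)

def status(a, u):
    # Current status X depends on group membership A and the latent U.
    return u + 0.5 * a

def predict(x, a):
    # By construction this predictor recovers u = x - 0.5 * a, so its output
    # is identical in both worlds for the same individual: CF holds.
    return x - 0.5 * a

def react(y_hat, a):
    # Hypothetical downstream dynamics: the two worlds convert the same
    # prediction into future improvement at different rates.
    gain = 1.0 if a == 0 else 0.4
    return gain * y_hat

for a in (0, 1):
    x = status(a, u)
    y_hat = predict(x, a)           # same distribution in both worlds
    y_future = x + react(y_hat, a)  # future status after the response
    print(f"A={a}: mean prediction {y_hat.mean():+.3f}, "
          f"mean future status {y_future.mean():+.3f}")
```

Running this, both worlds see a mean prediction of about 0, while the mean future status is about 0.0 for A = 0 and about 0.5 for A = 1: the future-status distributions differ across the counterfactual worlds even though the predictions do not, which is exactly the failure mode LCF is designed to rule out.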
Main authors: | Zuo, Zhiqun; Xie, Tian; Tan, Xuwei; Zhang, Xueru; Khalili, Mohammad Mahdi |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
DOI: | 10.48550/arxiv.2412.01065 |
Source: | arXiv.org |
URL: | https://arxiv.org/abs/2412.01065 |