Fairness for AUC via Feature Augmentation

We study fairness in the context of classification where the performance is measured by the area under the curve (AUC) of the receiver operating characteristic. AUC is commonly used to measure the performance of prediction models. The same classifier can have significantly varying AUCs for different protected groups and, in real-world applications, it is often desirable to reduce such cross-group differences. We address the problem of how to acquire additional features to most greatly improve AUC for the disadvantaged group. We develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups. The approach requires only a few summary statistics to offer provable guarantees on AUC improvement, and allows managers flexibility in determining where in the fairness-accuracy tradeoff they would like to be. We evaluate fairAUC on synthetic and real-world datasets and find that it significantly improves AUC for the disadvantaged group relative to benchmarks maximizing overall AUC and minimizing bias between groups.
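The paper's fairAUC method selects additional features for the disadvantaged group using summary statistics, with provable AUC guarantees. As a loose, self-contained illustration of the underlying greedy idea only (not the paper's algorithm), the sketch below computes per-group AUC via the Mann-Whitney statistic and picks whichever candidate feature most raises that group's AUC. All data, feature names (`feat_weak`, `feat_strong`), and effect sizes are synthetic assumptions for illustration.

```python
import random

def auc(pos, neg):
    # Mann-Whitney estimate of AUC: P(positive's score > negative's score),
    # counting ties as one half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)

# Synthetic "disadvantaged" group: binary labels and a weakly informative
# baseline score (small signal, unit Gaussian noise).
labels = [random.random() < 0.5 for _ in range(200)]
base = [0.2 * y + random.gauss(0, 1) for y in labels]

# Hypothetical candidate features with different signal strengths.
candidates = {
    "feat_weak":   [0.1 * y + random.gauss(0, 1) for y in labels],
    "feat_strong": [1.5 * y + random.gauss(0, 1) for y in labels],
}

def group_auc(scores):
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    return auc(pos, neg)

# Greedy augmentation step: add the candidate feature that most improves
# this group's AUC when summed with the baseline score.
best = max(candidates, key=lambda f: group_auc(
    [b + c for b, c in zip(base, candidates[f])]))
aug_auc = group_auc([b + c for b, c in zip(base, candidates[best])])
print(best, round(aug_auc, 3))
```

In this toy setup the strongly informative feature should be selected and the group's AUC should rise above its baseline value; the actual fairAUC procedure makes this choice from summary statistics rather than by exhaustive re-scoring.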

Detailed Description

Bibliographic Details
Main Authors: Fong, Hortense; Kumar, Vineet; Mehrotra, Anay; Vishnoi, Nisheeth K
Format: Article
Language: English
creator Fong, Hortense; Kumar, Vineet; Mehrotra, Anay; Vishnoi, Nisheeth K
description We study fairness in the context of classification where the performance is measured by the area under the curve (AUC) of the receiver operating characteristic. AUC is commonly used to measure the performance of prediction models. The same classifier can have significantly varying AUCs for different protected groups and, in real-world applications, it is often desirable to reduce such cross-group differences. We address the problem of how to acquire additional features to most greatly improve AUC for the disadvantaged group. We develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups. The approach requires only a few summary statistics to offer provable guarantees on AUC improvement, and allows managers flexibility in determining where in the fairness-accuracy tradeoff they would like to be. We evaluate fairAUC on synthetic and real-world datasets and find that it significantly improves AUC for the disadvantaged group relative to benchmarks maximizing overall AUC and minimizing bias between groups.
format Article
creationdate 2021-11-24
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2111.12823
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computers and Society
Computer Science - Learning
Statistics - Machine Learning
title Fairness for AUC via Feature Augmentation
url https://arxiv.org/abs/2111.12823