FairXGBoost: Fairness-aware Classification in XGBoost

Highly regulated domains such as finance have long favoured the use of machine learning algorithms that are scalable, transparent, robust and yield better performance. One of the most prominent examples of such an algorithm is XGBoost. Meanwhile, there is also a growing interest in building fair and...

Full description

Saved in:
Bibliographic Details
Main Authors: Ravichandran, Srinivasan, Khurana, Drona, Venkatesh, Bharath, Edakunni, Narayanan Unny
Format: Article
Language: eng
Subjects:
Online Access: Request full text
creator Ravichandran, Srinivasan; Khurana, Drona; Venkatesh, Bharath; Edakunni, Narayanan Unny
description Highly regulated domains such as finance have long favoured the use of machine learning algorithms that are scalable, transparent, robust and yield better performance. One of the most prominent examples of such an algorithm is XGBoost. Meanwhile, there is also a growing interest in building fair and unbiased models in these regulated domains and numerous bias-mitigation algorithms have been proposed to this end. However, most of these bias-mitigation methods are restricted to specific model families such as logistic regression or support vector machine models, thus leaving modelers with a difficult decision of choosing between fairness from the bias-mitigation algorithms and scalability, transparency, performance from algorithms such as XGBoost. We aim to leverage the best of both worlds by proposing a fair variant of XGBoost that enjoys all the advantages of XGBoost, while also matching the levels of fairness from the state-of-the-art bias-mitigation algorithms. Furthermore, the proposed solution requires very little in terms of changes to the original XGBoost library, thus making it easy for adoption. We provide an empirical analysis of our proposed method on standard benchmark datasets used in the fairness community.
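The abstract stresses that the fair variant needs only minor changes to the original XGBoost library. As a rough, hedged illustration of how such a scheme can be wired up without patching the library at all, the sketch below uses XGBoost's standard custom-objective hook to add a covariance-style fairness penalty between a binary sensitive attribute and the raw margin on top of the usual logistic loss. The helper name make_fair_logistic_obj, the mu trade-off parameter and the particular penalty are illustrative assumptions, not the exact regularizer proposed in the paper.

import numpy as np
import xgboost as xgb

def make_fair_logistic_obj(sensitive, mu=1.0):
    # sensitive: 0/1 numpy array aligned with the training rows.
    # mu: trade-off between accuracy and the (signed) covariance penalty.
    s_centered = sensitive - sensitive.mean()

    def obj(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of the raw margin
        # Gradient/Hessian of the logistic loss, plus the gradient of
        # mu * cov(sensitive, margin); the penalty is linear in the margin,
        # so it contributes nothing to the Hessian.
        grad = (p - y) + mu * s_centered / len(y)
        hess = p * (1.0 - p)
        return grad, hess

    return obj

# Hypothetical usage with features X, labels y and sensitive attribute s:
# dtrain = xgb.DMatrix(X, label=y)
# booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
#                     num_boost_round=100,
#                     obj=make_fair_logistic_obj(s, mu=0.5))

Because everything goes through the stock obj callback, the approach keeps the scalability and tooling of unmodified XGBoost, which is the trade-off the abstract points at; a production version would more likely penalise the magnitude of the covariance or a calibrated fairness metric rather than its signed value.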
doi_str_mv 10.48550/arxiv.2009.01442
format Article
identifier DOI: 10.48550/arxiv.2009.01442
language eng
recordid cdi_arxiv_primary_2009_01442
source arXiv.org
subjects Computer Science - Artificial Intelligence
title FairXGBoost: Fairness-aware Classification in XGBoost