Games for Fairness and Interpretability

As Machine Learning (ML) systems become more ubiquitous, ensuring the fair and equitable application of their underlying algorithms is of paramount importance. We argue that one way to achieve this is to proactively cultivate public pressure for ML developers to design and develop fairer algorithms...

Detailed description

Saved in:
Bibliographic details
Main authors: Chu, Eric, Gillani, Nabeel, Makini, Sneha Priscilla
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Chu, Eric
Gillani, Nabeel
Makini, Sneha Priscilla
description As Machine Learning (ML) systems become more ubiquitous, ensuring the fair and equitable application of their underlying algorithms is of paramount importance. We argue that one way to achieve this is to proactively cultivate public pressure for ML developers to design and develop fairer algorithms -- and that one way to cultivate public pressure while simultaneously serving the interests and objectives of algorithm developers is through gameplay. We propose a new class of games -- "games for fairness and interpretability" -- as one example of an incentive-aligned approach for producing fairer and more equitable algorithms. Games for fairness and interpretability are carefully designed games with mass appeal. They are inherently engaging, provide insights into how machine learning models work, and ultimately produce data that helps researchers and developers improve their algorithms. We highlight several possible examples of games, their implications for fairness and interpretability, how their proliferation could create positive public pressure by narrowing the gap between algorithm developers and the general public, and why the machine learning community could benefit from them.
doi_str_mv 10.48550/arxiv.2004.09551
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2004.09551
language eng
recordid cdi_arxiv_primary_2004_09551
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computers and Society
title Games for Fairness and Interpretability
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-12T19%3A32%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Games%20for%20Fairness%20and%20Interpretability&rft.au=Chu,%20Eric&rft.date=2020-04-20&rft_id=info:doi/10.48550/arxiv.2004.09551&rft_dat=%3Carxiv_GOX%3E2004_09551%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true