Re-imagining Algorithmic Fairness in India and Beyond
Conventional algorithmic fairness is West-centric, as seen in its sub-groups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.
Saved in:
Published in: | arXiv.org 2021-01 |
---|---|
Main authors: | Sambasivan, Nithya ; Arnesen, Erin ; Hutchinson, Ben ; Doshi, Tulsee ; Prabhakaran, Vinodkumar |
Format: | Article |
Language: | eng |
Subjects: | Algorithms ; Economic factors ; Qualitative analysis ; Social factors |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Sambasivan, Nithya ; Arnesen, Erin ; Hutchinson, Ben ; Doshi, Tulsee ; Prabhakaran, Vinodkumar |
description | Conventional algorithmic fairness is West-centric, as seen in its sub-groups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2480952440 |
source | Free E-Journals |
subjects | Algorithms ; Economic factors ; Qualitative analysis ; Social factors |
title | Re-imagining Algorithmic Fairness in India and Beyond |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T05%3A27%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Re-imagining%20Algorithmic%20Fairness%20in%20India%20and%20Beyond&rft.jtitle=arXiv.org&rft.au=Sambasivan,%20Nithya&rft.date=2021-01-27&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2480952440%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2480952440&rft_id=info:pmid/&rfr_iscdi=true |