Definition drives design: Disability models and mechanisms of bias in AI technologies

The increasing deployment of artificial intelligence (AI) tools to inform decision making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Bibliographic details
Published in: arXiv.org, 2022-11
Main authors: Newman-Griffis, Denis; Rauchberg, Jessica Sage; Alharbi, Rahaf; Hickman, Louise; Hochheiser, Harry
Format: Article
Language: English (eng)
Subjects:
Online access: Full text
container_title arXiv.org
creator Newman-Griffis, Denis; Jessica Sage Rauchberg; Alharbi, Rahaf; Hickman, Louise; Hochheiser, Harry
description The increasing deployment of artificial intelligence (AI) tools to inform decision making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.
format Article
publisher Ithaca: Cornell University Library, arXiv.org
startdate 2022-11-23
rights 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_2677952398
source Free E-Journals
subjects Algorithms
Artificial intelligence
Data analysis
Decision analysis
Decision making
Design
Disability
Health care
Mathematical analysis
People with disabilities
Public policy
title Definition drives design: Disability models and mechanisms of bias in AI technologies
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-15T20%3A09%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Definition%20drives%20design:%20Disability%20models%20and%20mechanisms%20of%20bias%20in%20AI%20technologies&rft.jtitle=arXiv.org&rft.au=Newman-Griffis,%20Denis&rft.date=2022-11-23&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2677952398%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2677952398&rft_id=info:pmid/&rfr_iscdi=true