Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities

Bibliographic Details

Published in: AI & society, 2024-08, Vol. 39 (4), p. 2045-2057
Main authors: O’Connor, Sinead; Liu, Helen
Format: Article
Language: English
Online access: Full text

Abstract: Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and efficient method for optimizing the delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper focuses specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how the understanding of bias could influence policy and outcomes. Building on a rich seam of literature from both technological and sociological fields, the article constructs an original framework through which to analyse both the perpetuation and mitigation of gender biases, categorizing AI technologies based on whether their input is text or images. Through the close analysis and pairing of four case studies, the paper unites two often disparate approaches to the investigation of bias in technology, revealing the large and varied potential for AI to echo and even amplify existing human bias, while acknowledging the important role AI itself can play in reducing or reversing these effects. The conclusion calls for further collaboration between scholars from the worlds of technology, gender studies and public policy in fully exploring algorithmic accountability, as well as in accurately and transparently exploring the potential consequences of the introduction of AI technologies.

DOI: 10.1007/s00146-023-01675-4
Publisher: London: Springer London
Rights: © The Author(s) 2023. Published under a Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/).
ORCID: https://orcid.org/0000-0003-1968-2171
ISSN: 0951-5666
EISSN: 1435-5655
Source: SpringerLink Journals - AutoHoldings
Subjects:
Artificial Intelligence
Bias
Computer Science
Control
Decision making
Engineering Economics
Gender
Human bias
Logistics
Marketing
Mechatronics
Methodology of the Social Sciences
Open Forum
Organization
Performing Arts
Public policy
Robotics
Technology assessment