Constructing categories: Moving beyond protected classes in algorithmic fairness

Bibliographic details

Published in: Journal of the American Society for Information Science and Technology, 2023-06, Vol. 74 (6), p. 663-668
Main authors: Belitz, Clara; Ocumpaugh, Jaclyn; Ritter, Steven; Baker, Ryan S.; Fancsali, Stephen E.; Bosch, Nigel
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Categories; Decision making; Education; Educational software; Group theory; Machine learning; Students
Online access: Full text
DOI: 10.1002/asi.24643
ISSN: 2330-1635
EISSN: 2330-1643
Publisher: Hoboken, USA: John Wiley & Sons, Inc
Abstract

Automated, data‐driven decision making is increasingly common in a variety of application domains. In educational software, for example, machine learning has been applied to tasks like selecting the next exercise for students to complete. Machine learning methods, however, are not always equally effective for all groups of students. Current approaches to designing fair algorithms tend to focus on statistical measures concerning a small subset of legally protected categories like race or gender. Focusing solely on legally protected categories, however, can limit our understanding of bias and unfairness by ignoring the complexities of identity. We propose an alternative approach to categorization, grounded in sociological techniques of measuring identity. By soliciting survey data and interviews from the population being studied, we can build context‐specific categories from the bottom up. The emergent categories can then be combined with extant algorithmic fairness strategies to discover which identity groups are not well‐served, and thus where algorithms should be improved or avoided altogether. We focus on educational applications but present arguments that this approach should be adopted more broadly for issues of algorithmic fairness across a variety of applications.
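As a rough illustration of how emergent, bottom-up categories could plug into an extant group-fairness check as the abstract describes, the Python sketch below computes per-group accuracy and flags groups that trail overall accuracy. This is not code from the article: the category labels, toy data, function names (per_group_accuracy, underserved_groups), and the accuracy-gap threshold are all illustrative assumptions; the article itself proposes the categorization methodology, not a specific metric.

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of a model's predictions within each identity group.

    groups: one emergent category label per student, e.g. coded
    bottom-up from surveys and interviews rather than drawn from a
    fixed list of legally protected classes.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def underserved_groups(y_true, y_pred, groups, gap=0.05):
    """Flag groups whose accuracy trails overall accuracy by more
    than `gap` -- candidates for improving or avoiding the model,
    in the spirit of the authors' proposal. The threshold is an
    assumption for demonstration."""
    overall = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
    acc = per_group_accuracy(y_true, y_pred, groups)
    return {g: a for g, a in acc.items() if overall - a > gap}

# Toy data with hypothetical emergent categories from survey coding.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["rural", "rural", "multilingual", "multilingual",
          "first-gen", "first-gen", "rural", "multilingual"]
print(underserved_groups(y_true, y_pred, groups))
# -> {'multilingual': 0.666..., 'first-gen': 0.5} (overall accuracy 0.75)
```

The same per-group comparison works unchanged with any other group-fairness statistic (false-positive rate, calibration, etc.) in place of accuracy, since the emergent categories enter only as labels.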