NBIAS: A Natural Language Processing Framework for Bias Identification in Text
Saved in:
Published in: | arXiv.org 2023-08 |
---|---|
Main Authors: | Raza, Shaina; Garg, Muskan; Reji, Deepak John; Syed Raza Bashir; Chen, Ding |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Bias; Ethics; Natural language processing; Robustness (mathematics) |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Raza, Shaina; Garg, Muskan; Reji, Deepak John; Syed Raza Bashir; Chen, Ding |
description | Bias in textual data can lead to skewed interpretations and outcomes when the data is used. These biases could perpetuate stereotypes, discrimination, or other forms of unfair treatment. An algorithm trained on biased data may end up making decisions that disproportionately impact a certain group of people. Therefore, it is crucial to detect and remove these biases to ensure the fair and ethical use of data. To this end, we develop a comprehensive and robust framework NBIAS that consists of four main layers: data, corpus construction, model development and an evaluation layer. The dataset is constructed by collecting diverse data from various domains, including social media, healthcare, and job hiring portals. As such, we applied a transformer-based token classification model that is able to identify bias words/phrases through a unique named entity BIAS. In the evaluation procedure, we incorporate a blend of quantitative and qualitative measures to gauge the effectiveness of our models. We achieve accuracy improvements ranging from 1% to 8% compared to baselines. We are also able to generate a robust understanding of the model functioning. The proposed approach is applicable to a variety of biases and contributes to the fair and ethical use of textual data. |
format | Article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2845952277 |
source | Free E-Journals |
subjects | Algorithms; Bias; Ethics; Natural language processing; Robustness (mathematics) |
title | NBIAS: A Natural Language Processing Framework for Bias Identification in Text |
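The record's description outlines the core technical idea: a transformer-based token-classification model that tags biased words or phrases with a single named-entity type, BIAS. The sketch below shows how such a tagger could be wired up with the Hugging Face `transformers` library; the base checkpoint (`bert-base-uncased`), the BIO label scheme, and the example sentence are illustrative assumptions, not the authors' released NBIAS model, which would first have to be fine-tuned on their BIAS-annotated corpus.

```python
# Minimal sketch of a BIAS token tagger, assuming a BIO label scheme and a
# generic BERT base checkpoint; neither is confirmed as the authors' setup.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

labels = ["O", "B-BIAS", "I-BIAS"]  # single entity type "BIAS" in BIO format

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# The classification head above is randomly initialized; in practice it would be
# fine-tuned on the BIAS-annotated corpus before inference yields useful tags.
detector = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word pieces back into whole spans
)

# Hypothetical input sentence, used only to illustrate the output format.
for span in detector("Older employees often struggle to keep up with new software."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```

Framing bias detection as token classification in this way lets an evaluation layer reuse standard span-level precision, recall, and F1, which fits the blend of quantitative measures the description mentions.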