Bias in human data: A feedback from social sciences

Bibliographic details

Published in: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2023-07, Vol. 13 (4), p. e1498
Authors: Takan, Savaş; Ergün, Duygu; Getir Yaman, Sinem; Kılınççeker, Onur
Format: Article
Language: English
Subjects: Algorithms; Artificial intelligence; cultivation theory; data bias; fairness; Heuristic methods; Machine learning; new media; social computing; social science; Social sciences
ISSN: 1942-4787
EISSN: 1942-4795
DOI: 10.1002/widm.1498
Publisher: Wiley Periodicals, Inc (Hoboken, USA)
Online access: Full text
Abstract

The fairness of human-related software has become critical with its widespread use in our daily lives, where life-changing decisions are made. However, the use of these systems has produced many erroneous results, and technologies are being developed to tackle such unexpected outcomes. Companies generally address the issue by focusing on algorithm-oriented errors, but the resulting fixes usually work only for particular algorithms, because the cause of the problem is not just the algorithm; it is also the data itself. For instance, deep learning cannot readily establish cause-and-effect relationships. In addition, the boundaries between statistical and heuristic algorithms are unclear, and an algorithm's fairness may vary with the context of its data. From this point of view, the article focuses on what the data should look like, which is not merely a matter of statistics. The picture in question is revealed through a scenario specific to “vulnerable and disadvantaged” groups, one of the most fundamental problems today. Drawing on a joint contribution of computer science and the social sciences, the article aims to predict the possible social dangers that may arise from artificial intelligence algorithms, using the clues obtained in this study. To highlight the potential social and mass problems caused by data, Gerbner's “cultivation theory” is reinterpreted. To this end, an experimental evaluation is conducted on popular algorithms and their data sets, such as Word2Vec, GloVe, and ELMo. The article stresses the importance of a holistic approach combining the algorithm, the data, and an interdisciplinary assessment.

This article is categorized under: Algorithmic Development > Statistics

Graphical abstract: The human-machine cultivation cycle.
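The record stops at the abstract, and the authors' evaluation code is not included. As a rough, self-contained illustration of the kind of association probe that bias studies commonly run on static word embeddings such as Word2Vec or GloVe, the Python sketch below compares cosine similarities between occupation words and gendered pronouns. The toy vectors, the word choices, and the association_gap helper are all invented for illustration and do not come from the article.

import numpy as np

# Toy stand-in "embeddings"; a real study would load trained Word2Vec
# or GloVe vectors instead. Values are chosen only to show the mechanics.
emb = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":  np.array([0.2, 0.8, 0.4, 0.1]),
    "he":     np.array([1.0, 0.0, 0.2, 0.3]),
    "she":    np.array([0.1, 1.0, 0.3, 0.2]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(word, attr_a, attr_b):
    # How much more strongly `word` associates with attr_a than attr_b.
    return cosine(emb[word], emb[attr_a]) - cosine(emb[word], emb[attr_b])

for occupation in ("doctor", "nurse"):
    gap = association_gap(occupation, "he", "she")
    print(f"{occupation}: he-vs-she association gap = {gap:+.3f}")

With real trained vectors, a consistently positive gap for one occupation and a negative gap for another would be exactly the kind of data-borne regularity the article attributes to the data rather than to the algorithm: the probe is algorithm-agnostic and surfaces whatever associations the training corpus carried.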