Towards Trustworthy Artificial Intelligence for Equitable Global Health
Artificial intelligence (AI) can potentially transform global health, but algorithmic bias can exacerbate social inequities and disparity. Trustworthy AI entails the intentional design to ensure equity and mitigate potential biases. To advance trustworthy AI in global health, we convened a workshop...
Saved in:
Published in: | arXiv.org 2023-09 |
---|---|
Main authors: | Qin, Hong; Kong, Jude; Ding, Wandi; Ahluwalia, Ramneek; Christo El Morr; Engin, Zeynep; Effoduh, Jake Okechukwu; Hwa, Rebecca; Guo, Serena Jingchuan; Seyyed-Kalantari, Laleh; Muyingo, Sylvia Kiwuwa; Moore, Candace Makeda; Parikh, Ravi; Schwartz, Reva; Zhu, Dongxiao; Wang, Xiaoqian; Zhang, Yiye |
Format: | Article |
Language: | eng |
Subjects: | Artificial intelligence; Bias; Data transparency; Ethics; Knowledge management; Public health; Risk management; Trustworthiness |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Qin, Hong; Kong, Jude; Ding, Wandi; Ahluwalia, Ramneek; Christo El Morr; Engin, Zeynep; Effoduh, Jake Okechukwu; Hwa, Rebecca; Guo, Serena Jingchuan; Seyyed-Kalantari, Laleh; Muyingo, Sylvia Kiwuwa; Moore, Candace Makeda; Parikh, Ravi; Schwartz, Reva; Zhu, Dongxiao; Wang, Xiaoqian; Zhang, Yiye |
description | Artificial intelligence (AI) can potentially transform global health, but algorithmic bias can exacerbate social inequities and disparity. Trustworthy AI entails the intentional design to ensure equity and mitigate potential biases. To advance trustworthy AI in global health, we convened a workshop on Fairness in Machine Intelligence for Global Health (FairMI4GH). The event brought together a global mix of experts from various disciplines, community health practitioners, policymakers, and more. Topics covered included managing AI bias in socio-technical systems, AI's potential impacts on global health, and balancing data privacy with transparency. Panel discussions examined the cultural, political, and ethical dimensions of AI in global health. FairMI4GH aimed to stimulate dialogue, facilitate knowledge transfer, and spark innovative solutions. Drawing from NIST's AI Risk Management Framework, it provided suggestions for handling AI risks and biases. The need to mitigate data biases from the research design stage, adopt a human-centered approach, and advocate for AI transparency was recognized. Challenges such as updating legal frameworks, managing cross-border data sharing, and motivating developers to reduce bias were acknowledged. The event emphasized the necessity of diverse viewpoints and multi-dimensional dialogue for creating a fair and ethical AI framework for equitable global health. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-09 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2864014493 |
source | Free E-Journals |
subjects | Artificial intelligence; Bias; Data transparency; Ethics; Knowledge management; Public health; Risk management; Trustworthiness |
title | Towards Trustworthy Artificial Intelligence for Equitable Global Health |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T18%3A21%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Towards%20Trustworthy%20Artificial%20Intelligence%20for%20Equitable%20Global%20Health&rft.jtitle=arXiv.org&rft.au=Qin,%20Hong&rft.date=2023-09-10&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2864014493%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2864014493&rft_id=info:pmid/&rfr_iscdi=true |