Interdisciplinary Expertise to Advance Equitable Explainable AI

The field of artificial intelligence (AI) is rapidly influencing health and healthcare, but bias and poor performance persist for populations who face widespread structural oppression. Previous work has clearly outlined the need for more rigorous attention to data representativeness and model performance to advance equity and reduce bias. However, there is also an opportunity to improve the explainability of AI by leveraging best practices from social epidemiology and health equity to help develop hypotheses for the associations found. In this paper, we focus on explainable AI (XAI) and describe a framework for interdisciplinary expert panel review to discuss and critically assess AI model explanations from multiple perspectives, identify areas of bias, and suggest directions for future research. We emphasize the importance of the interdisciplinary expert panel in producing more accurate, equitable interpretations that are historically and contextually informed. Interdisciplinary panel discussions can help reduce bias, identify potential confounders, and point to opportunities for additional research where there are gaps in the literature. In turn, these insights can suggest opportunities for AI model improvement.
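
As a purely illustrative aside (not taken from the paper), the short Python sketch below shows one common way an AI model explanation of the kind such a panel might review, a gradient-based saliency map, can be produced. The model, input, and variable names are placeholders, not artifacts from the study.

    # Illustrative only: a simple gradient-based saliency map, one example of the
    # kind of XAI output an interdisciplinary panel might inspect. The model and
    # input below are placeholders, not artifacts from the paper.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None)  # stand-in classifier; a real review would use the deployed model
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
    logits = model(image)
    top_class = logits.argmax(dim=1).item()

    # Gradient of the top-class score w.r.t. the input: large magnitudes flag pixels
    # the model leaned on, which panelists can weigh against clinical and social context.
    logits[0, top_class].backward()
    saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
    print(saliency.shape)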

Bibliographic Details
Main authors: Bennett, Chloe R; Cole-Lewis, Heather; Farquhar, Stephanie; Haamel, Naama; Babenko, Boris; Lang, Oran; Fleck, Mat; Traynis, Ilana; Lau, Charles; Horn, Ivor; Lyles, Courtney
Format: Article
Language: English
Published: 2024-05-29 (arXiv preprint)
DOI: 10.48550/arxiv.2406.18563
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Computers and Society; Computer Science - Learning
Rights: Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0)
Source: arXiv.org
Online access: https://arxiv.org/abs/2406.18563