Does Explainable Artificial Intelligence Improve Human Decision-Making?
Explainable AI provides insight into the "why" for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. Whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model are open questions. Using real datasets, we compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation), and AI prediction with explanation. We find providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed the strongest predictor for human decision accuracy was AI accuracy and that users were somewhat able to detect when the AI was correct versus incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
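To make the comparison in the abstract concrete, here is a minimal, purely illustrative Python sketch of that kind of analysis: computing human decision accuracy per condition, and then conditioning on whether the AI was correct. The trial tuples and field names are hypothetical toy data chosen for illustration; this is not the authors' experimental data or analysis code.

```python
# Illustrative sketch only (hypothetical toy data, not the authors' code):
# per-condition human accuracy, and human accuracy split by AI correctness.
from collections import defaultdict

# Each trial: (condition, ai_correct, human_correct).
# ai_correct is None in the control condition, where no AI prediction is shown.
trials = [
    ("control",        None,  True),
    ("ai_prediction",  True,  True),
    ("ai_prediction",  False, False),
    ("ai_explanation", True,  True),
    ("ai_explanation", False, True),
]

# Human decision accuracy per experimental condition.
by_condition = defaultdict(list)
for condition, ai_correct, human_correct in trials:
    by_condition[condition].append(human_correct)
for condition, outcomes in sorted(by_condition.items()):
    print(f"{condition}: accuracy = {sum(outcomes) / len(outcomes):.2f}")

# Human accuracy conditioned on whether the AI was right, pooling the two
# AI conditions -- the comparison behind "the strongest predictor for
# human decision accuracy was AI accuracy".
by_ai = defaultdict(list)
for condition, ai_correct, human_correct in trials:
    if ai_correct is not None:
        by_ai[ai_correct].append(human_correct)
for ai_correct, outcomes in sorted(by_ai.items()):
    label = "AI correct" if ai_correct else "AI incorrect"
    print(f"{label}: human accuracy = {sum(outcomes) / len(outcomes):.2f}")
```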
Saved in:
Published in: | arXiv.org 2020-06 |
---|---|
Main Authors: | Alufaisan, Yasmeen; Marusich, Laura R; Bakdash, Jonathan Z; Zhou, Yan; Kantarcioglu, Murat |
Format: | Article |
Language: | eng |
Subjects: | Accuracy; Decision making; Explainable artificial intelligence; Human performance |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Alufaisan, Yasmeen; Marusich, Laura R; Bakdash, Jonathan Z; Zhou, Yan; Kantarcioglu, Murat |
description | Explainable AI provides insight into the "why" for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. Whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model are open questions. Using real datasets, we compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation), and AI prediction with explanation. We find providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed the strongest predictor for human decision accuracy was AI accuracy and that users were somewhat able to detect when the AI was correct versus incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2415576466 |
source | Free E-Journals |
subjects | Accuracy; Decision making; Explainable artificial intelligence; Human performance |
title | Does Explainable Artificial Intelligence Improve Human Decision-Making? |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T11%3A42%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Does%20Explainable%20Artificial%20Intelligence%20Improve%20Human%20Decision-Making?&rft.jtitle=arXiv.org&rft.au=Alufaisan,%20Yasmeen&rft.date=2020-06-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2415576466%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2415576466&rft_id=info:pmid/&rfr_iscdi=true |