Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare
The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation.
Saved in:
Published in: | Risk analysis 2024-04, Vol.44 (4), p.939-957 |
---|---|
Main authors: | Kerstan, Sophie ; Bienefeld, Nadine ; Grote, Gudela |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 957 |
---|---|
container_issue | 4 |
container_start_page | 939 |
container_title | Risk analysis |
container_volume | 44 |
creator | Kerstan, Sophie ; Bienefeld, Nadine ; Grote, Gudela |
description | The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation. Therefore, it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path for implicit comparative trust associations on relative preferences for AI over physicians is only significant through risk, but not through benefit perceptions. This finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed considering the conceptualization of trust as heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted. |
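The method the abstract describes, a path model on survey data (N = 378) in which risk and benefit perceptions mediate between trust and AI preference, can be illustrated with a minimal mediation sketch. Everything below is simulated: the variable names, effect sizes, and data are hypothetical stand-ins, not the study's actual model, measures, or results.

```python
import numpy as np

# Hypothetical mediation path of the kind the abstract describes:
# trust association -> risk perception -> AI preference.
rng = np.random.default_rng(0)
n = 378  # sample size matching the study's survey

trust = rng.normal(size=n)                                        # comparative trust association
risk = -0.5 * trust + rng.normal(scale=0.5, size=n)               # path a: trust -> risk perception
pref = -0.6 * risk + 0.2 * trust + rng.normal(scale=0.5, size=n)  # paths b and c' (direct effect)

def ols(y, predictors):
    """Least-squares coefficients for y ~ predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(risk, [trust])[0]              # effect of trust on risk perception
b, c_prime = ols(pref, [risk, trust])  # effect of risk on preference; direct trust effect
indirect = a * b                       # indirect (mediated) effect of trust on preference
```

Here the product `a * b` quantifies the mediated pathway; the study's actual model distinguishes explicit and implicit trust associations and AI knowledge as predictors, with separate risk and benefit mediators, which this two-path sketch deliberately simplifies.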
doi_str_mv | 10.1111/risa.14216 |
format | Article |
fullrecord | (raw ProQuest/CrossRef XML record omitted; unique fields extracted below) |
publisher | United States: Blackwell Publishing Ltd |
eissn | 1539-6924 |
pmid | 37722964 |
tpages | 19 |
rights | 2023 The Authors. Risk Analysis published by Wiley Periodicals LLC on behalf of Society for Risk Analysis. Published under a CC BY 4.0 license. |
orcidid | 0000-0003-4200-8695 ; 0000-0002-5581-0452 ; 0000-0002-0805-6485 |
fulltext | fulltext |
identifier | ISSN: 0272-4332 |
ispartof | Risk analysis, 2024-04, Vol.44 (4), p.939-957 |
issn | 0272-4332 ; 1539-6924
language | eng |
recordid | cdi_proquest_miscellaneous_2866379288 |
source | MEDLINE; Wiley Online Library Journals Frontfile Complete |
subjects | Artificial Intelligence ; Associations ; Cognition ; Concept Formation ; Decision making ; Decision theory ; Health care ; Heuristic ; Humans ; Medical personnel ; Optimization ; Perceptions ; Physicians ; Public opinion ; Risk assessment ; Risk perception ; Trust
title | Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T16%3A36%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Choosing%20human%20over%20AI%20doctors?%20How%20comparative%20trust%20associations%20and%20knowledge%20relate%20to%20risk%20and%20benefit%20perceptions%20of%20AI%20in%20healthcare&rft.jtitle=Risk%20analysis&rft.au=Kerstan,%20Sophie&rft.date=2024-04&rft.volume=44&rft.issue=4&rft.spage=939&rft.epage=957&rft.pages=939-957&rft.issn=0272-4332&rft.eissn=1539-6924&rft_id=info:doi/10.1111/risa.14216&rft_dat=%3Cproquest_cross%3E2866379288%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3032991178&rft_id=info:pmid/37722964&rfr_iscdi=true |