Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) People trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) Four different types of system information all increased people's trust in recommendations; and (3) Math and logic skills may be as important as ML for decision-makers working with ML recommendations.

Bibliographic Details
Published in: arXiv.org, 2020-05
Main authors: Suresh, Harini; Lao, Natalie; Liccardi, Ilaria
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Human-Computer Interaction; Computer Science - Learning; Decision making; Machine learning; Trust
Online access: Full text
DOI: 10.48550/arxiv.2005.10960
EISSN: 2331-8422