When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

Trust is a critical issue in Human-Robot Interaction (HRI), as it lies at the core of people's willingness to accept and use a non-human agent. Theory of Mind is the ability to understand that others hold beliefs and intentions that may differ from one's own. Evidence from psychology and HRI suggests that trust and Theory of Mind are interconnected and interdependent concepts, since the decision to trust another agent depends on our own representation of that agent's actions, beliefs and intentions. However, very few studies take the robot's Theory of Mind into consideration when studying trust in HRI. In this paper, we investigated whether exposure to a robot's Theory of Mind abilities affects humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot that was presented as having either low-level or high-level Theory of Mind. Specifically, participants were asked to accept the robot's price evaluations of common objects. Their willingness to change their own price judgement (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level Theory of Mind abilities were trusted more than robots presented with low-level Theory of Mind skills.
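
The abstract operationalizes trust as the participants' willingness to adopt the robot's price suggestions. The paper's data and analysis code are not part of this record, so the following is only a minimal illustrative sketch, assuming each trial is reduced to a condition label and an accept/reject decision; the function, condition names, and trial data are hypothetical.

```python
from collections import defaultdict

def acceptance_rates(trials):
    """Per-condition acceptance rate: the fraction of trials in which the
    participant gave up their own price judgement and accepted the robot's
    suggested price. `trials` is an iterable of (condition, accepted) pairs."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [n_accepted, n_total]
    for condition, accepted in trials:
        counts[condition][0] += int(accepted)
        counts[condition][1] += 1
    return {cond: n_acc / n_total for cond, (n_acc, n_total) in counts.items()}

# Hypothetical trial outcomes, not data from the paper.
trials = [
    ("high_tom", True), ("high_tom", True), ("high_tom", False),
    ("low_tom", True), ("low_tom", False), ("low_tom", False),
]
print(acceptance_rates(trials))  # high_tom -> 2/3, low_tom -> 1/3
```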

Bibliographic Details
Main Authors: Mou, Wenxuan; Ruocco, Martina; Zanatto, Debora; Cangelosi, Angelo
Format: Article
Language: English
Subjects: Computer Science - Robotics
DOI: 10.48550/arxiv.2101.10819
Published: 2021-01-26
Source: arXiv.org
Rights: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)
Online Access: Full text at https://arxiv.org/abs/2101.10819
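
The identifiers above (arXiv ID 2101.10819, DOI 10.48550/arxiv.2101.10819) can also be resolved programmatically. The sketch below queries the public arXiv Atom API for this record; the endpoint and Atom fields are standard arXiv API conventions, and the helper function name is chosen here only for illustration.

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # namespace used by the arXiv API feed

def fetch_arxiv_entry(arxiv_id: str) -> dict:
    """Query the public arXiv API for one identifier and return basic metadata."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(ATOM + "entry")
    if entry is None:
        raise ValueError(f"No arXiv entry found for {arxiv_id}")
    return {
        "title": " ".join(entry.findtext(ATOM + "title").split()),
        "authors": [a.findtext(ATOM + "name") for a in entry.findall(ATOM + "author")],
        "abstract": entry.findtext(ATOM + "summary").strip(),
        "links": [link.get("href") for link in entry.findall(ATOM + "link")],
    }

if __name__ == "__main__":
    record = fetch_arxiv_entry("2101.10819")
    print(record["title"])
    print("; ".join(record["authors"]))
```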