Committing to Interdependence: Implications from Game Theory for Human-Robot Trust
Human-robot interaction and game theory have developed distinct theories of trust for over three decades, in relative isolation from one another. Human-robot interaction has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled with over-trust and trust calibration, as well as with how to measure trust expectations, risk, and vulnerability. This paper presents initial steps toward closing the gap between these fields. Drawing on insights and experimental findings from interdependence theory and social psychology, it first analyzes a large game-theory competition data set to demonstrate that the strongest predictors for a wide variety of human-human trust interactions are the interdependence-derived variables for commitment and trust that we have developed. It then presents a second study with human-subject results for more realistic trust scenarios involving both human-human and human-machine trust. In both the competition data and our experimental data, the interdependence metrics better capture social 'over-trust' than either the rational or the normative psychological reasoning proposed by game theory. The work further explores how interdependence theory, with its focus on commitment, coercion, and cooperation, addresses many of the proposed underlying constructs and antecedents within human-robot trust, shedding new light on key similarities and differences that arise when robots replace humans in trust interactions.
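To make the contrast in the abstract concrete, the sketch below implements the canonical one-shot trust game from game theory and compares a purely rational expected-value rule with a hypothetical interdependence-flavored rule that also weighs the partner's outcomes. The payoff numbers, the `w_partner` weight, and both decision rules are illustrative assumptions made for this record; they are not the commitment and trust variables developed in the paper itself.

```python
import numpy as np

# One-shot trust game. Payoff numbers are illustrative assumptions,
# not taken from the paper.
# Row player (truster): row 0 = trust, row 1 = withhold.
# Column player (trustee): col 0 = reciprocate, col 1 = betray.
TRUSTER = np.array([[10.0, -5.0],   # trusting: rewarded vs. betrayed
                    [ 0.0,  0.0]])  # withholding: sure outcome either way
TRUSTEE = np.array([[10.0, 15.0],   # betrayal tempts the trustee (15 > 10)
                    [ 0.0,  0.0]])

def rational_trust(p_reciprocate: float) -> bool:
    """Classical expected-value rule: trust iff the truster's expected
    payoff from trusting exceeds the sure payoff of withholding."""
    ev = p_reciprocate * TRUSTER[0, 0] + (1 - p_reciprocate) * TRUSTER[0, 1]
    return ev > TRUSTER[1, 0]

def interdependent_trust(p_reciprocate: float, w_partner: float = 0.25) -> bool:
    """Hypothetical interdependence-style rule: the truster's effective
    payoffs also weigh the trustee's outcomes by w_partner, so shared
    gains lower the probability threshold at which trusting looks worthwhile."""
    joint = TRUSTER + w_partner * TRUSTEE
    ev = p_reciprocate * joint[0, 0] + (1 - p_reciprocate) * joint[0, 1]
    return ev > joint[1, 0]

if __name__ == "__main__":
    for p in (0.1, 0.2, 0.4):
        print(f"p(reciprocate)={p:.1f}  rational={rational_trust(p)}  "
              f"interdependent={interdependent_trust(p)}")
```

At p = 0.2 the rational rule withholds while the partner-weighted rule still trusts; the gap between those two thresholds is a toy version of the social 'over-trust' that the abstract says the interdependence metrics capture better than rational or normative reasoning.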
creator | Razin, Yosef S; Feigh, Karen M |
doi_str_mv | 10.48550/arxiv.2111.06939 |
format | Article |
identifier | DOI: 10.48550/arxiv.2111.06939 |
language | eng |
recordid | cdi_arxiv_primary_2111_06939 |
source | arXiv.org |
subjects | Computer Science - Computer Science and Game Theory; Computer Science - Human-Computer Interaction; Computer Science - Robotics |
title | Committing to Interdependence: Implications from Game Theory for Human-Robot Trust |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T11%3A45%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Committing%20to%20Interdependence:%20Implications%20from%20Game%20Theory%20for%20Human-Robot%20Trust&rft.au=Razin,%20Yosef%20S&rft.date=2021-11-12&rft_id=info:doi/10.48550/arxiv.2111.06939&rft_dat=%3Carxiv_GOX%3E2111_06939%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |