Models of Trust in Human Control of Swarms With Varied Levels of Autonomy

Bibliographic Details
Published in: IEEE Transactions on Human-Machine Systems, 2020-06, Vol. 50(3), pp. 194-204
Main authors: Nam, Changjoo; Walker, Phillip; Li, Huao; Lewis, Michael; Sycara, Katia
Format: Article
Language: English
Online access: Order full text
Description: In this paper, we study human trust and its computational models in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We implement three LOAs: manual, mixed-initiative (MI), and fully autonomous LOA. While the swarm in the MI LOA is controlled by a human operator and an autonomous search algorithm collaboratively, the swarms in the manual and autonomous LOAs are fully directed by the human and the search algorithm, respectively. From user studies, we find that humans tend to make their decisions based on physical characteristics of the swarm rather than its performance, since the task performance of swarms is not clearly perceivable by humans. Based on the analysis, we formulate trust as a Markov decision process whose state space includes the factors affecting trust. We develop variations of the trust model for different LOAs. We employ an inverse reinforcement learning algorithm to learn behaviors of the operator from demonstrations, where the learned behaviors are used to predict human trust. Compared to an existing model, our models reduce the prediction error by at most 39.6%, 36.5%, and 28.8% in the manual, MI, and auto-LOA, respectively.
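The abstract formulates trust as a Markov decision process and recovers the operator's reward via inverse reinforcement learning. This record does not specify the paper's actual state space, actions, transition model, or learned rewards, so the following is only a generic, illustrative sketch of the MDP side of that recipe: discretized trust levels as states, operator choices as actions, placeholder (random) transitions and rewards, and value iteration producing the policy that would be used to predict the operator's behavior.

```python
import numpy as np

# Hypothetical discretization: trust levels 0..4 form the MDP state space;
# actions stand in for operator choices (e.g., intervene vs. let the swarm act).
# Transition probabilities and rewards are illustrative placeholders, NOT the
# paper's values; in the paper the reward is recovered via inverse RL.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

rng = np.random.default_rng(0)
# P[a, s, s'] = probability of moving from trust state s to s' under action a;
# each row is drawn from a Dirichlet so it sums to 1.
P = rng.dirichlet(np.ones(N_STATES), size=(N_ACTIONS, N_STATES))
# R[s, a] = placeholder reward for taking action a in trust state s.
R = rng.standard_normal((N_STATES, N_ACTIONS))

def value_iteration(P, R, gamma, tol=1e-8):
    """Solve the MDP; the greedy policy models the operator's next action."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, GAMMA)
print("state values:", np.round(V, 3))
print("greedy policy:", policy)
```

In an IRL setting like the one the abstract describes, `R` would not be hand-specified: it would be inferred from demonstrations of operator behavior, after which solving the resulting MDP (as above) yields the policy used to predict trust-related decisions.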
DOI: 10.1109/THMS.2019.2896845
ISSN: 2168-2291
EISSN: 2168-2305
Record ID: cdi_ieee_primary_8651317
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithms
Automation
Autonomy
Computational modeling
Decision analysis
Human–robot interaction
human–swarm interaction
Machine learning
Markov analysis
Markov processes
multirobot systems
Physical properties
Predictive models
Robot control
Robot kinematics
Robot sensing systems
Search algorithms
Supervisory control
swarm robotics
Task analysis
trust