Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action

Bibliographic Details
Published in: Adaptive Behavior, 2016-10, Vol. 24 (5), p. 350-372
Main authors: Silva, Rui; Louro, Luís; Malheiro, Tiago; Erlhagen, Wolfram; Bicho, Estela
Format: Article
Language: English
Subjects: Activation; Architecture; Dynamical systems; Dynamics; Human behavior; Inference; Populations; Robots
Online access: Full text

Abstract: We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode relevant information in the form of self-sustained activation patterns, which are triggered by input from connected populations and evolve continuously in time. The architecture implements a dynamic and flexible context-dependent mapping from observed hand and facial actions of the human onto adequate complementary behaviors of the robot that take into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture was validated in multiple scenarios in which an anthropomorphic robot and a human operator assemble a toy object from its components. The scenarios focus on the robot’s capacity to understand the human’s actions and emotional states, detect errors, and adapt its behavior accordingly by adjusting its decisions and movements during the execution of the task.
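Dynamic neural field architectures of this kind are commonly formalized with an Amari-type field equation, in which a localized transient input can push a neural population into a self-sustained activation pattern. As a rough, self-contained illustration of that single-field mechanism (not the paper's coupled multi-field architecture; the interaction kernel, sigmoid and all parameter values below are assumptions chosen only for the sketch), a 1-D field can be simulated as follows:

```python
import numpy as np

# Minimal sketch of a single 1-D Amari-type dynamic neural field:
#   tau * du(x,t)/dt = -u(x,t) + int w(x - x') f(u(x',t)) dx' + S(x,t) + h
# A transient localized input triggers an activation bump that can remain
# self-sustained after the input is removed (a working-memory function).
# All parameter values are illustrative, not taken from the paper.

N = 201
x = np.linspace(-5.0, 5.0, N)          # field dimension (e.g. a task-relevant parameter)
dx = x[1] - x[0]
tau, h = 1.0, -1.0                     # time constant and resting level

def w(d):
    """Lateral interaction: local excitation, broader constant inhibition."""
    return 2.0 * np.exp(-d**2 / (2 * 0.6**2)) - 0.5

def f(u):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-5.0 * u))

W = w(x[:, None] - x[None, :])         # precomputed interaction matrix

u = np.full(N, h)                      # field starts at the resting level
dt, steps = 0.05, 800
for k in range(steps):
    t = k * dt
    # localized input around x = 1, switched off after t = 20
    S = 3.0 * np.exp(-(x - 1.0)**2 / 0.5) if t < 20.0 else 0.0
    u += (dt / tau) * (-u + W.dot(f(u)) * dx + S + h)

# a positive peak here indicates a bump that outlived its triggering input
print("peak activation after input removal: %.2f" % u.max())
```

In the reported architecture many such fields are coupled, so that a self-sustained pattern in one population serves as input to connected populations; the snippet only shows the within-field dynamics that let a triggered peak outlast its input.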
DOI: 10.1177/1059712316665451
ISSN: 1059-7123
EISSN: 1741-2633
Record ID: cdi_proquest_miscellaneous_1864580562
Source: Access via SAGE; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-08T15%3A28%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Combining%20intention%20and%20emotional%20state%20inference%20in%20a%20dynamic%20neural%20field%20architecture%20for%20human-robot%20joint%20action&rft.jtitle=Adaptive%20behavior&rft.au=Silva,%20Rui&rft.date=2016-10-01&rft.volume=24&rft.issue=5&rft.spage=350&rft.epage=372&rft.pages=350-372&rft.issn=1059-7123&rft.eissn=1741-2633&rft_id=info:doi/10.1177/1059712316665451&rft_dat=%3Cproquest_cross%3E1864580562%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1846409012&rft_id=info:pmid/&rft_sage_id=10.1177_1059712316665451&rfr_iscdi=true