Perceptual and Semantic Processing in Cognitive Robots


Bibliographic Details
Published in: Electronics (Basel), 2021-09, Vol. 10 (18), p. 2216
Main authors: Bukhari, Syed Tanweer Shah; Qazi, Wajahat Mahmood
Format: Article
Language: English
Online access: Full text
Description: The challenge in human–robot interaction is to build an agent that can act upon human implicit statements, where the agent is instructed to execute tasks without explicit utterance. Understanding what to do under such scenarios requires the agent to have the capability to process object grounding and affordance learning from acquired knowledge. Affordance has been the driving force for agents to construct relationships between objects, their effects, and actions, whereas grounding is effective in the understanding of spatial maps of objects present in the environment. The main contribution of this paper is to propose a methodology for the extension of object affordance and grounding, the Bloom-based cognitive cycle, and the formulation of perceptual semantics for context-based human–robot interaction. In this study, we implemented YOLOv3 to formulate visual perception and LSTM to identify the level of the cognitive cycle, as cognitive processes synchronized in the cognitive cycle. In addition, we used semantic networks and conceptual graphs as a method to represent knowledge in various dimensions related to the cognitive cycle. The visual perception showed an average precision of 0.78, an average recall of 0.87, and an average F1 score of 0.80, indicating an improvement in the generation of semantic networks and conceptual graphs. The similarity index used for the lingual and visual association showed promising results and improved the overall experience of human–robot interaction.
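The abstract's averaged detection scores can be illustrated with a small macro-averaging sketch. The per-class precision/recall values below are hypothetical (the record does not include the paper's per-class results); they are chosen only to show how an average precision of 0.78 and an average recall of 0.87 can coexist with an average F1 of 0.80, which is lower than the F1 computed from the averaged scores.

```python
# Macro-averaged precision/recall/F1, as typically reported for a
# multi-class detector such as YOLOv3. Class names and values here
# are hypothetical illustrations, not taken from the paper.

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall (0 if both are 0)."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# Hypothetical per-class (precision, recall) pairs.
per_class = {
    "cup":    (0.95, 0.95),
    "book":   (0.50, 0.90),
    "bottle": (0.89, 0.76),
}

n = len(per_class)
avg_p = sum(p for p, _ in per_class.values()) / n            # 0.78
avg_r = sum(r for _, r in per_class.values()) / n            # 0.87
macro_f1 = sum(f1(p, r) for p, r in per_class.values()) / n  # ~0.80

# Macro F1 is generally not f1(avg_p, avg_r) (~0.82 here), which is
# why an average F1 of 0.80 can sit below the F1 of the averages.
print(round(avg_p, 2), round(avg_r, 2), round(macro_f1, 2))
```

The design point: averaging F1 per class, rather than taking the F1 of averaged precision and recall, weights each object class equally and penalizes classes where precision and recall diverge.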
DOI: 10.3390/electronics10182216
Publisher: Basel: MDPI AG
Rights: 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
ORCID: https://orcid.org/0000-0002-8728-2623
ISSN: 2079-9292
EISSN: 2079-9292
Source: MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)
Subjects: Architecture
Cognition & reasoning
Graphs
Knowledge
Knowledge acquisition
Memory
Robots
Semantics
Taxonomy
Visual perception
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T00%3A06%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Perceptual%20and%20Semantic%20Processing%20in%20Cognitive%20Robots&rft.jtitle=Electronics%20(Basel)&rft.au=Bukhari,%20Syed%20Tanweer%20Shah&rft.date=2021-09-01&rft.volume=10&rft.issue=18&rft.spage=2216&rft.pages=2216-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics10182216&rft_dat=%3Cproquest_cross%3E2576385223%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2576385223&rft_id=info:pmid/&rfr_iscdi=true