Neurosymbolic Artificial Intelligence (Why, What, and How)


Bibliographic Details
Published in: IEEE Intelligent Systems, 2023-05, Vol. 38 (3), p. 56-62
Authors: Sheth, Amit; Roy, Kaushik; Gaur, Manas
Format: Article
Language: English
Online access: Order full text
Description: Humans interact with the environment using a combination of perception (transforming sensory inputs from their environment into symbols) and cognition (mapping symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning). Human perception-inspired machine perception, in the context of artificial intelligence (AI), refers to large-scale pattern recognition from raw data using neural networks trained with self-supervised learning objectives such as next-word prediction or object recognition. Machine cognition, on the other hand, encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions, which seems to require retaining symbolic mappings from perception outputs to knowledge about the environment. For example, humans can follow and explain the guidelines and safety constraints that drive their decision making in safety-critical applications such as health care, criminal justice, and autonomous driving.
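The perception-to-cognition pipeline the abstract describes can be sketched as a toy example: a "perception" step maps raw sensor input to a discrete symbol, and a "cognition" step applies explicit, human-readable rules to that symbol so the resulting decision can be explained. This is a minimal illustration, not the article's method; all names here (`perceive`, `SAFETY_RULES`, `decide`) and the thresholds are hypothetical, and a real system would use a trained neural network for the perception step.

```python
def perceive(sensor_reading: float) -> str:
    """Stand-in for a neural classifier: raw input -> discrete symbol.
    (Hypothetical thresholds; a real system would learn this mapping.)"""
    if sensor_reading > 0.8:
        return "obstacle_close"
    elif sensor_reading > 0.4:
        return "obstacle_far"
    return "clear"

# Symbolic knowledge: each rule maps a perceived symbol to an action plus a
# human-readable justification, so the decision can be followed and explained.
SAFETY_RULES = {
    "obstacle_close": ("brake", "safety constraint: obstacle within stopping distance"),
    "obstacle_far": ("slow_down", "guideline: reduce speed near obstacles"),
    "clear": ("proceed", "no constraint triggered"),
}

def decide(sensor_reading: float) -> tuple[str, str]:
    """Neurosymbolic-style decision: perception output -> symbolic rule lookup."""
    symbol = perceive(sensor_reading)
    action, reason = SAFETY_RULES[symbol]
    return action, reason

action, reason = decide(0.9)
print(action, "-", reason)  # prints the chosen action and its justification
```

The point of the symbolic layer is that, unlike the opaque perception step, its rules can be inspected and cited when justifying a decision, mirroring the explainability requirement the abstract raises for safety-critical applications.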
DOI: 10.1109/MIS.2023.3268724
Publisher: IEEE, Los Alamitos
Rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
ISSN: 1541-1672
EISSN: 1941-1294
Source: IEEE Electronic Library (IEL)
Subjects:
Artificial intelligence
Cognition
Decision making
Human factors
Machine learning
Medical services
Neural networks
Object recognition
Pattern recognition
Perception
Reasoning
Safety
Safety critical
Self-supervised learning
Symbols
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T14%3A58%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Neurosymbolic%20Artificial%20Intelligence%20(Why,%20What,%20and%20How)&rft.jtitle=IEEE%20intelligent%20systems&rft.au=Sheth,%20Amit&rft.date=2023-05-01&rft.volume=38&rft.issue=3&rft.spage=56&rft.epage=62&rft.pages=56-62&rft.issn=1541-1672&rft.eissn=1941-1294&rft.coden=IISYF7&rft_id=info:doi/10.1109/MIS.2023.3268724&rft_dat=%3Cproquest_RIE%3E2825602850%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2825602850&rft_id=info:pmid/&rft_ieee_id=10148662&rfr_iscdi=true