Semantics in an Intelligent Control System [and Discussion]
Published in: | Philosophical transactions of the Royal Society of London. Series A: Mathematical, physical, and engineering sciences, 1994-10, Vol.349 (1689), p.43-58 |
---|---|
Main authors: | Sloman, Aaron; Prescott, A.; Shadbolt, N.; Steedman, M. |
Format: | Article |
Language: | English |
Subjects: | Architecture; Control systems; Desire; Humans; Machinery; Mind; Physics; Semantics; Syntactics; Syntax |
Online access: | Full text |
container_end_page | 58 |
---|---|
container_issue | 1689 |
container_start_page | 43 |
container_title | Philosophical transactions of the Royal Society of London. Series A: Mathematical, physical, and engineering sciences |
container_volume | 349 |
creator | Sloman, Aaron; Prescott, A.; Shadbolt, N.; Steedman, M. |
description | Much research on intelligent systems has concentrated on low level mechanisms or limited subsystems. We need to understand how to assemble the components in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal and artificial minds. Only when we have a good theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The global information level 'virtual machine' architecture is more relevant to this than detailed mechanisms. For example, differences between connectionist and symbolic implementations may be of minor importance. An architecture provides a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper outlines some requirements for such architectures showing the importance of an idea shared between engineers and philosophers: the concept of 'semantic information'. |
doi_str_mv | 10.1098/rsta.1994.0112 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1364-503X |
ispartof | Philosophical transactions of the Royal Society of London. Series A: Mathematical, physical, and engineering sciences, 1994-10, Vol.349 (1689), p.43-58 |
issn | 1364-503X; 0962-8428; 1471-2962; 2054-0299 |
language | eng |
recordid | cdi_crossref_primary_10_1098_rsta_1994_0112 |
source | JSTOR Mathematics & Statistics; JSTOR Archive Collection A-Z Listing |
subjects | Architecture; Control systems; Desire; Humans; Machinery; Mind; Physics; Semantics; Syntactics; Syntax |
title | Semantics in an Intelligent Control System [and Discussion] |
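The abstract in the description field above characterizes a mind as a self-modifying control system with a hierarchy of levels of control. The toy Python sketch below is purely illustrative and is not a reconstruction of any architecture from the paper: it shows one minimal way to express such layering, with a reactive layer applying fixed low-level corrections while a deliberative layer monitors its performance and rewrites its parameters. All class names, thresholds, and numbers here are invented for illustration.

```python
# Illustrative toy sketch (not from the paper): a two-layer control loop in which
# a higher "deliberative" layer monitors and re-tunes a lower "reactive" layer,
# loosely echoing the abstract's idea of a self-modifying control system with
# distinct levels of control. All names and numbers are invented.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ReactiveLayer:
    """Low-level controller: nudges the state toward a setpoint with a fixed gain."""
    gain: float = 0.1

    def act(self, state: float, setpoint: float) -> float:
        # Simple proportional correction toward the setpoint.
        return state + self.gain * (setpoint - state)


@dataclass
class DeliberativeLayer:
    """Higher-level controller: observes progress and modifies the lower layer."""
    history: List[float] = field(default_factory=list)

    def revise(self, reactive: ReactiveLayer, error: float) -> None:
        self.history.append(error)
        # If the error is not shrinking fast enough, raise the reactive layer's
        # gain: the control hierarchy modifies its own lower level.
        if len(self.history) >= 2 and abs(self.history[-1]) > 0.8 * abs(self.history[-2]):
            reactive.gain = min(1.0, reactive.gain * 1.5)


def run(setpoint: float = 10.0, steps: int = 20) -> float:
    state = 0.0
    reactive = ReactiveLayer()
    deliberative = DeliberativeLayer()
    for _ in range(steps):
        state = reactive.act(state, setpoint)
        deliberative.revise(reactive, setpoint - state)
    return state


if __name__ == "__main__":
    print(f"final state: {run():.3f}")  # approaches the setpoint as the gain adapts
```

The point of the sketch is only the separation of levels: the lower level acts on the world, the higher level acts on the lower level, which is the structural idea the abstract gestures at.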