Neural networks based on Peano curves and hairy neurons


Detailed Description

Saved in:
Bibliographic Details
Published in: Telematics and Informatics, 1990, Vol. 7 (3), p. 403-430
Author: Szu, Harold H.
Format: Article
Language: English
Online Access: Full text
DOI: 10.1016/S0736-5853(05)80017-6
ISSN: 0736-5853
EISSN: 1879-324X
Publisher: Elsevier Ltd
Source: Elsevier ScienceDirect Journals

Description:

Neural Intelligence (NI) can automatically extract pattern features given an Artificial Intelligence (AI) performance rule. For example, neurocomputing searches for features that satisfy a rule-based criterion: an intraclass-minimum, interclass-maximum cost function among features. The features representing images so discovered by NI can be passed to AI for further processing to increase efficiency and reliability, for AI can follow any formulated algorithm exactly, and provide (an imagery context knowledge base as) a constraint on further NI neurocomputing. Such integration, with a dialogue between the two, is believed actually to happen in the human visual system in tracking of moving objects, for instance. Consequently, we propose that both AI and NI are two sides of an intelligence coin that can together solve the pattern recognition problem and unify an intelligent machine.

The rule-based minimax cost function used for feature search can, furthermore, give us a top-down architectural design of neural networks by means of Taylor series expansion of the cost function. A typical minimax cost function consists of the sample variance of each class in the numerator, and separation of the centers of classes in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
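
To make that criterion concrete, here is a minimal sketch (not from the paper) of such a minimax cost in Python with NumPy. The function name and the choice of summed pairwise center distances in the denominator are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def minimax_cost(features, labels):
    """Illustrative minimax cost: total intraclass scatter (unnormalized
    sample variance) in the numerator, separation of class centers in the
    denominator. Minimizing it pursues intraclass clustering and
    interclass segregation simultaneously. Assumes features is an (N, d)
    array and labels holds at least two classes."""
    classes = np.unique(labels)
    centers = np.array([features[labels == c].mean(axis=0) for c in classes])
    # Numerator: scatter of each class around its own center.
    intra = sum(((features[labels == c] - centers[i]) ** 2).sum()
                for i, c in enumerate(classes))
    # Denominator: squared distances between all pairs of class centers.
    inter = sum(((centers[i] - centers[j]) ** 2).sum()
                for i in range(len(classes))
                for j in range(i + 1, len(classes)))
    return intra / inter
```

Feature search then amounts to minimizing this ratio over candidate feature mappings; a smooth cost of this form is also what admits the Taylor series expansion used for the top-down network design.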

This Taylor expansion variable must be a one-dimensional array that traces along a space-filling curve which, as Peano proved, preserves two-dimensional neighborhood relationships. Therefore, this one-dimensional array can support Taylor series expansion in terms of neighborhood derivatives.
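
The locality-preserving 2-D-to-1-D scan can be sketched as follows. For brevity this sketch uses the Hilbert curve, a closely related space-filling curve with the same neighborhood-preserving property, rather than Peano's original base-3 construction; the helper names are ours.

```python
import numpy as np

def d2xy(n, d):
    """Map index d along an n x n Hilbert curve to (x, y), n a power of
    two. Standard iterative conversion via quadrant reflection/rotation."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def space_filling_scan(image):
    """Flatten a square 2^k x 2^k image into a 1-D array whose neighboring
    entries are also neighbors in the plane, so finite differences along
    the array approximate two-dimensional neighborhood derivatives."""
    n = image.shape[0]
    return np.array([image[y][x] for x, y in (d2xy(n, d) for d in range(n * n))])
```

For example, space_filling_scan(np.arange(16).reshape(4, 4)) returns the 16 pixels in curve order; consecutive entries are always one step apart in the plane, which is what lets the 1-D array carry the neighborhood derivatives needed for the Taylor expansion.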

An adaptive space-filling capability is postulated for useful neuronic representations by using a top layer neural network, similar to Adaptive Resonance Theory, when more detailed spatial resolution becomes desirable at the place in the picture where an interesting change occurs. A self-consistent perturbation expansion can speed up the training procedure.

A hairy neuron model that has two internal degrees of freedom is useful to determine a dynamically self-reconfigurable architecture. The convergence theorem of such a morphology is given for a hairy neural network under arbitrary time scales.
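
As a loose sketch of such dynamics (not Szu's exact equations, which are given in the paper): one plausible reading takes each neuron's sigmoid gain and threshold as the two internal degrees of freedom, evolving on their own, slower, time scales than the activations. The update rules for the slow variables below are hypothetical placeholders.

```python
import numpy as np

def hairy_step(x, W, gain, thr, g_target=4.0, dt=0.01,
               tau_gain=50.0, tau_thr=50.0):
    """One Euler step for a recurrent net of 'hairy' neurons under the
    assumed reading that each neuron carries two internal state variables
    (gain and threshold) besides its activation, each with its own time
    scale. x, gain, thr are length-n arrays; W is an n x n weight matrix."""
    u = W @ x                                    # net input to every neuron
    y = 1.0 / (1.0 + np.exp(-gain * (u - thr)))  # per-neuron adaptive sigmoid
    x = x + dt * (-x + y)                        # fast activation dynamics
    thr = thr + (dt / tau_thr) * (u - thr)       # hypothetical: threshold tracks input
    gain = gain + (dt / tau_gain) * (g_target - gain)  # hypothetical: gain relaxes to target
    return x, gain, thr
```

Because a small gain effectively switches a neuron's influence off, internal variables of this kind give one handle on dynamic self-reconfiguration; the paper's convergence theorem is what guarantees stable behavior when the time scales (dt, tau_gain, tau_thr here) are arbitrary.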