Neural networks based on Peano curves and hairy neurons

Published in: Telematics and Informatics, 1990, Vol. 7 (3), pp. 403-430
Author: Szu, Harold H.
Format: Article
Language: English
Online access: Full text

Abstract: Neural Intelligence (NI) can automatically extract pattern features given an Artificial Intelligence (AI) performance rule. For example, neurocomputing searches for features that satisfy a rule-based criterion: an intraclass-minimum, interclass-maximum cost function among features. The image features so discovered by NI can be passed to AI for further processing to increase efficiency and reliability, for AI can follow any formulated algorithm exactly and can provide (an imagery-context knowledge base as) a constraint on further NI neurocomputing. Such an integration, with a dialogue between the two, is believed to occur in the human visual system, for instance in the tracking of moving objects. Consequently, we propose that AI and NI are two sides of one intelligence coin, which together can solve the pattern recognition problem and unify an intelligent machine. The rule-based minimax cost function used for the feature search can, furthermore, yield a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical minimax cost function has the sample variance of each class in the numerator and the separation of the class centers in the denominator; thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously. The Taylor expansion variable must be a one-dimensional array that traces a space-filling curve which, as Peano proved, preserves two-dimensional neighborhood relationships; such a one-dimensional array can therefore support a Taylor series expansion in terms of neighborhood derivatives. An adaptive space-filling capability is postulated for useful neuronic representations by means of a top-layer neural network, similar to Adaptive Resonance Theory, when more detailed spatial resolution becomes desirable at the place in the picture where an interesting change occurs. A self-consistent perturbation expansion can speed up the training procedure. A hairy neuron model having two internal degrees of freedom is useful for determining a dynamically self-reconfigurable architecture. A convergence theorem for such a morphology is given for a hairy neural network under arbitrary time scales.
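A plausible concrete form of the minimax criterion described in the abstract, with the intraclass sample variances in the numerator and the squared separations of the class centers in the denominator, is the following ratio (the notation is assumed here for illustration and is not taken from the paper):

% Sketch of an intraclass-minimum, interclass-maximum cost energy:
% sample scatter of each class on top, center separations on the bottom.
E \;=\; \frac{\sum_{c} \sigma_c^{2}}{\sum_{c<c'} \left\lVert \mu_c - \mu_{c'} \right\rVert^{2}},
\qquad
\sigma_c^{2} \;=\; \frac{1}{N_c} \sum_{i \in c} \left\lVert x_i - \mu_c \right\rVert^{2},

where x_i are the feature vectors assigned to class c, \mu_c is their center, and N_c their count. Minimizing this single energy shrinks each class's scatter (intraclass clustering) while pushing the class centers apart (interclass segregation), so the two conflicting goals named in the abstract are indeed pursued simultaneously.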
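The abstract's other central device, tracing the two-dimensional picture along a space-filling curve so that neighbors on the resulting one-dimensional array are also neighbors in the picture, can be sketched with the closely related Hilbert curve via the standard bitwise index computation. The function names and the toy 4-by-4 picture are illustrative assumptions, not material from the paper:

def xy2d(n, x, y):
    # Map pixel (x, y) on an n-by-n grid (n a power of two) to its
    # position d along the Hilbert curve, 0 <= d < n*n.
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            # Rotate/reflect the quadrant into the curve's canonical orientation.
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def linearize(image):
    # Flatten a square 2-D image (side a power of two) into the 1-D
    # array over which the cost-function expansion would be written.
    n = len(image)
    line = [0] * (n * n)
    for y in range(n):
        for x in range(n):
            line[xy2d(n, x, y)] = image[y][x]
    return line

# 4-by-4 toy picture: entries adjacent in the output always come from
# pixels adjacent in the grid.
image = [[10 * y + x for x in range(4)] for y in range(4)]
print(linearize(image))

Because each step of the curve moves to a 4-adjacent pixel, finite differences taken along the one-dimensional array remain genuine neighborhood derivatives of the picture, which is what a Taylor series expansion of the cost function in terms of neighborhood derivatives requires.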
ISSN: 0736-5853
EISSN: 1879-324X
DOI: 10.1016/S0736-5853(05)80017-6