Connectionist Architectures for Artificial Intelligence
Published in: Computer (Long Beach, Calif.), 1987-01, Vol. 20 (1), p. 100-109
Main authors: ,
Format: Article
Language: English
Online access: Order full text
Abstract: Features of connectionist architectures (CA) in massively parallel computers for AI applications are discussed. A CA involves a large number of simple processors, each connected to a number of the other processors. Each processor has only a small amount of memory, yet the array can cumulatively store a large amount of data, which can be altered by changing the connections among the processors. Each processor is also limited to a few simple arithmetic or Boolean operations. A sufficient number of processors must be available for processing the subtasks of any task assigned to the machine. Approaches for performing pattern recognition and learning tasks with CAs are explored. Consideration is given to the NETL system, which uses local representations and marker-passing techniques, a value-passing system, back-propagation, constraint satisfaction in iterative networks, and the Boltzmann learning scheme. (M.S.K.)
ISSN: 0018-9162, 1558-0814
DOI: 10.1109/MC.1987.1663364
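
The abstract lists back-propagation among the learning schemes surveyed for connectionist architectures. As a rough illustrative sketch only, not code from the article, the following Python snippet trains a tiny network of simple units on the XOR task with back-propagation; the 2-2-1 layer sizes, the XOR task, the learning rate, the epoch count, and the random seed are all arbitrary assumptions chosen for the example.

```python
# Illustrative sketch (not from the article): a minimal connectionist network
# trained with back-propagation. Knowledge is stored in the connection weights,
# and each unit performs only a weighted sum followed by a squashing function.
import math
import random

random.seed(0)  # arbitrary seed for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# w_h[j] holds the weights (plus bias) from the 2 inputs to hidden unit j;
# w_o holds the weights (plus bias) from the 2 hidden units to the output.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR task
lr = 0.5  # learning rate (arbitrary)

for epoch in range(20000):
    for x, target in data:
        # Forward pass: weighted sums and sigmoid squashing.
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])

        # Backward pass: propagate the output error back through the connections.
        d_y = (y - target) * y * (1 - y)
        d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]

        # Gradient-descent updates on the squared error.
        w_o[0] -= lr * d_y * h[0]
        w_o[1] -= lr * d_y * h[1]
        w_o[2] -= lr * d_y
        for j in range(2):
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]

# After training, the outputs should typically approach the XOR targets 0, 1, 1, 0.
for x, target in data:
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    print(x, target, round(y, 3))
```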