Global collaboration through local interaction in competitive learning
Published in: Neural Networks, 2020-03, Vol. 123, pp. 393-400
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Feature maps that preserve the global topology of arbitrary datasets can be formed by self-organizing, competing agents. So far, it has been presumed that global interaction of agents is necessary for this process. We establish that this is not the case, and that global topology can be uncovered through strictly local interactions. Enforcing uniformity of map quality across all agents results in an algorithm that is able to consistently uncover the global topology of diversely challenging datasets. The applicability and scalability of this approach are further tested on a large point cloud dataset, revealing a linear relation between map training time and size. The presented work not only reduces algorithmic complexity but also constitutes a first step towards a distributed self-organizing map.
Highlights:
• Locally interacting self-organizing maps are incapable of preserving global data topology.
• We show that enforcing uniform quality throughout the map resolves this issue.
• Uniformity is enforced through localized negative feedback, coupling map quality and learning rate.
• The optimal range for the feedback hyper-parameter lies in a trade-off between map stability and quality.
• We show empirically that the algorithm's training time increases linearly with map size, using a large point cloud dataset.
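To make the coupling between map quality and learning rate more concrete, the sketch below is a minimal, hypothetical NumPy illustration rather than the authors' published algorithm: a grid SOM whose units interact only with their immediate grid neighbours, with each unit's learning rate modulated by its running quantization error relative to its local neighbourhood, i.e. a localized negative feedback that pushes map quality towards uniformity. The grid size, the tanh modulation, and the parameters `base_lr`, `beta`, and `err_decay` are assumptions made purely for illustration.

```python
# Illustrative sketch only: the paper's exact update rule is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 10, 10, 2               # map size and input dimensionality
weights = rng.random((grid_h, grid_w, dim))   # unit weight vectors
quant_err = np.zeros((grid_h, grid_w))        # running per-unit quantization error
base_lr, beta, err_decay = 0.3, 0.5, 0.9      # beta: hypothetical feedback strength

def local_neighbors(i, j):
    """Strictly local interaction: the unit itself and its 4-connected grid neighbours."""
    for di, dj in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < grid_h and 0 <= nj < grid_w:
            yield ni, nj

def local_mean_err(i, j):
    """Quality reference computed only from a unit's immediate neighbourhood."""
    return float(np.mean([quant_err[n] for n in local_neighbors(i, j)])) + 1e-12

def train_step(x):
    # Find the best-matching unit for input sample x.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)

    # Track the winner's running quantization error (its local map quality).
    quant_err[bi, bj] = err_decay * quant_err[bi, bj] + (1 - err_decay) * dists[bi, bj]

    # Localized negative feedback: a unit whose error is high relative to its
    # neighbourhood learns faster; one that already fits well learns slower.
    for ni, nj in local_neighbors(bi, bj):
        ref = local_mean_err(ni, nj)
        lr = base_lr * (1.0 + beta * np.tanh((quant_err[ni, nj] - ref) / ref))
        weights[ni, nj] += lr * (x - weights[ni, nj])

# Toy usage: train on uniform 2-D samples.
for x in rng.random((5000, 2)):
    train_step(x)
```

In this toy form, `beta` plays the role of the feedback hyper-parameter mentioned in the highlights: larger values react more strongly to local quality differences, at the cost of map stability.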
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2019.12.018