Global collaboration through local interaction in competitive learning
Feature maps that preserve the global topology of arbitrary datasets can be formed by self-organizing competing agents. So far, it has been presumed that global interaction of agents is necessary for this process. We establish that this is not the case, and that global topology can be uncovered through strictly local interactions. Enforcing uniformity of map quality across all agents results in an algorithm that consistently uncovers the global topology of diversely challenging datasets. The applicability and scalability of this approach are further tested on a large point cloud dataset, revealing a linear relation between map training time and size. The presented work not only reduces algorithmic complexity but also constitutes a first step towards a distributed self-organizing map.
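The mechanism the abstract describes (a self-organizing map whose updates stay strictly local, with map quality fed back into each unit's learning rate) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the names, the exact feedback form, and all constants are assumptions.

```python
import numpy as np

# Sketch of a self-organizing map with strictly local interaction: the winner
# updates itself and its immediate lattice neighbors only, and each unit's
# learning rate is coupled to its own running quantization error -- a localized
# negative feedback that pushes map quality toward uniformity across the grid.
# The coupling form and all hyper-parameter values below are assumptions.

rng = np.random.default_rng(0)

GRID = 8          # units arranged on an 8x8 lattice
DIM = 2           # input dimensionality
BASE_LR = 0.3     # base learning rate
FEEDBACK = 1.0    # hyper-parameter coupling local error to learning rate

weights = rng.random((GRID, GRID, DIM))
error = np.zeros((GRID, GRID))  # per-unit running quantization error

def local_neighbors(i, j):
    """Immediate lattice neighbors only -- no global neighborhood kernel."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < GRID and 0 <= nj < GRID:
            yield ni, nj

def train_step(x):
    # Find the best-matching unit for input x.
    d = np.linalg.norm(weights - x, axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)

    # Localized negative feedback: track the winner's quantization error and
    # let poorly fitting units adapt faster (clamped for stability).
    error[i, j] = 0.9 * error[i, j] + 0.1 * d[i, j]
    lr = min(1.0, BASE_LR * (1.0 + FEEDBACK * error[i, j]))

    # Strictly local update: winner plus its four immediate neighbors.
    weights[i, j] += lr * (x - weights[i, j])
    for ni, nj in local_neighbors(i, j):
        weights[ni, nj] += 0.5 * lr * (x - weights[ni, nj])

for _ in range(2000):
    train_step(rng.random(DIM))
```

As the highlights note, the feedback hyper-parameter sits in a trade-off: too strong a coupling destabilizes the map, too weak a coupling leaves map quality non-uniform.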
Saved in:
Published in: | Neural networks 2020-03, Vol.123, p.393-400 |
---|---|
Main authors: | Siddiqui, Abbas; Georgiadis, Dionysios |
Format: | Article |
Language: | English |
Subjects: | Competitive & collaborative learning; Locally interacting SOM; Point cloud estimation; Topologically preserving maps |
Online access: | Full text |
container_end_page | 400 |
---|---|
container_issue | |
container_start_page | 393 |
container_title | Neural networks |
container_volume | 123 |
creator | Siddiqui, Abbas; Georgiadis, Dionysios |
description | Feature maps that preserve the global topology of arbitrary datasets can be formed by self-organizing competing agents. So far, it has been presumed that global interaction of agents is necessary for this process. We establish that this is not the case, and that global topology can be uncovered through strictly local interactions. Enforcing uniformity of map quality across all agents results in an algorithm that consistently uncovers the global topology of diversely challenging datasets. The applicability and scalability of this approach are further tested on a large point cloud dataset, revealing a linear relation between map training time and size. The presented work not only reduces algorithmic complexity but also constitutes a first step towards a distributed self-organizing map.
•Locally interacting self-organizing maps are incapable of preserving global data topology.•We show that enforcing uniform quality throughout the map resolves this issue.•Uniformity is enforced through localized negative feedback, coupling map quality and learning rate.•The optimal range for the feedback hyper-parameter lies in a trade-off between map stability and quality.•We show empirically that the algorithm's training time increases linearly with map size, using a large point cloud dataset. |
doi_str_mv | 10.1016/j.neunet.2019.12.018 |
format | Article |
fullrecord | Siddiqui, Abbas; Georgiadis, Dionysios. "Global collaboration through local interaction in competitive learning." Neural networks (Neural Netw), 2020-03, Vol.123, p.393-400 (8 pages). United States: Elsevier Ltd. ISSN: 0893-6080; EISSN: 1879-2782. DOI: 10.1016/j.neunet.2019.12.018. PMID: 31926463. ORCID: 0000-0002-3672-6140. Subjects: Competitive & collaborative learning; Locally interacting SOM; Point cloud estimation; Topologically preserving maps. Peer reviewed. © 2019 Elsevier Ltd. All rights reserved. |
fulltext | fulltext |
identifier | ISSN: 0893-6080 |
ispartof | Neural networks, 2020-03, Vol.123, p.393-400 |
issn | 0893-6080; 1879-2782 |
language | eng |
recordid | cdi_proquest_miscellaneous_2336254270 |
source | Elsevier ScienceDirect Journals |
subjects | Competitive & collaborative learning; Locally interacting SOM; Point cloud estimation; Topologically preserving maps |
title | Global collaboration through local interaction in competitive learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T05%3A02%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Global%20collaboration%20through%20local%20interaction%20in%20competitive%20learning&rft.jtitle=Neural%20networks&rft.au=Siddiqui,%20Abbas&rft.date=2020-03&rft.volume=123&rft.spage=393&rft.epage=400&rft.pages=393-400&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2019.12.018&rft_dat=%3Cproquest_cross%3E2336254270%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2336254270&rft_id=info:pmid/31926463&rft_els_id=S0893608019304186&rfr_iscdi=true |