Dictionary Learning with Accumulator Neurons

The Locally Competitive Algorithm (LCA) uses local competition between non-spiking leaky integrator neurons to infer sparse representations, allowing for potentially real-time execution on massively parallel neuromorphic architectures such as Intel's Loihi processor. Here, we focus on the problem of inferring sparse representations from streaming video using dictionaries of spatiotemporal features optimized in an unsupervised manner for sparse reconstruction. Non-spiking LCA has previously been used to achieve unsupervised learning of spatiotemporal dictionaries composed of convolutional kernels from raw, unlabeled video. We demonstrate how unsupervised dictionary learning with spiking LCA (S-LCA) can be efficiently implemented using accumulator neurons, which combine a conventional leaky-integrate-and-fire (LIF) spike generator with an additional state variable that is used to minimize the difference between the integrated input and the spiking output. We demonstrate dictionary learning across a wide range of dynamical regimes, from graded to intermittent spiking, for inferring sparse representations of both static images drawn from the CIFAR database and video frames captured by a DVS camera. On a classification task that requires identifying the suit of cards in a deck being rapidly flipped through, as viewed by a DVS camera, we find essentially no degradation in performance as the LCA model used to infer sparse spatiotemporal representations migrates from graded to spiking. We conclude that accumulator neurons are likely to provide a powerful enabling component of future neuromorphic hardware for implementing online unsupervised learning of spatiotemporal dictionaries optimized for sparse reconstruction of streaming video from event-based DVS cameras.
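The abstract describes the accumulator neuron as an LIF spike generator augmented with a state variable that minimizes the difference between the integrated input and the spiking output. The sketch below illustrates that idea for a single neuron; the function name, parameters, and update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accumulator_neuron(drive, dt=1e-3, tau=20e-3, threshold=1.0):
    """Illustrative single accumulator neuron (assumed form, not the paper's code).

    An LIF-style membrane integrates the graded drive; a second state
    variable ('acc') tracks the difference between the integrated input
    and the spikes emitted so far, so the emitted spike count approximates
    the graded activation over time.
    """
    v = 0.0                      # LIF membrane potential (graded activation)
    acc = 0.0                    # accumulated input-minus-output difference
    spikes = np.zeros_like(drive)
    for t, x in enumerate(drive):
        v += (dt / tau) * (x - v)        # leaky integration of the input
        acc += v * dt                    # add what should have been emitted
        if acc >= threshold * dt:        # enough deficit accumulated: spike
            spikes[t] = 1.0
            acc -= threshold * dt        # subtract the spike's contribution
    return spikes

# A constant graded drive yields a spike rate close to that drive,
# i.e., the spiking output tracks the graded (non-spiking) activation.
spk = accumulator_neuron(np.full(2000, 0.4))
print("mean spike rate:", spk.mean())    # ~0.4 under these assumed parameters
```

This sigma-delta-style coupling is one common way to make a spiking unit's firing rate track a graded value; the paper evaluates such neurons inside the LCA dynamics across regimes ranging from graded to intermittently spiking.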

Bibliographic Details
Main Authors: Parpart, Gavin; Gonzalez, Carlos; Stewart, Terrence C; Kim, Edward; Rego, Jocelyn; O'Brien, Andrew; Nesbit, Steven; Kenyon, Garrett T; Watkins, Yijing
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
DOI: 10.48550/arxiv.2205.15386
Source: arXiv.org