A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.
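The record contains no code, but the abstract's two central ideas, temporally extended synaptic response functions and a liquid state machine reservoir, can be illustrated in a few lines. The sketch below is a minimal NumPy illustration under assumed parameters (an alpha-shaped synaptic kernel, 50 leaky integrate-and-fire neurons, random weights); it is not the STPU hardware or the authors' implementation, only the general spiking/reservoir idea the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Temporally extended synaptic response --------------------------------
# A binary synapse delivers its full weight at the spike time only; the
# abstract's biologically inspired synapse spreads that effect over time.
# Here that is modeled with an alpha kernel w*(t/tau)*exp(1 - t/tau);
# the kernel shape, tau, and weight are assumptions for illustration.
dt, tau = 1e-3, 10e-3                                 # 1 ms steps, 10 ms time constant
t_kernel = np.arange(0.0, 5 * tau, dt)
alpha_kernel = (t_kernel / tau) * np.exp(1.0 - t_kernel / tau)

spikes_in = (rng.random(300) < 0.05).astype(float)    # 300 ms random input spike train
binary_current = 0.8 * spikes_in                      # weight applied only at spike times
shaped_current = 0.8 * np.convolve(spikes_in, alpha_kernel)[: spikes_in.size]
print("peak drive, binary vs shaped:", binary_current.max(), round(shaped_current.max(), 3))

# --- A tiny "liquid": recurrent leaky integrate-and-fire reservoir --------
n = 50                                                # reservoir size (assumption)
w_in = rng.normal(0.0, 1.0, size=n)                   # input weights
w_rec = rng.normal(0.0, 0.3 / np.sqrt(n), size=(n, n))  # recurrent weights
v = np.zeros(n)                                       # membrane potentials
v_thresh, leak = 1.0, 0.95
states = []

for t in range(spikes_in.size):
    v = leak * v + w_in * shaped_current[t]           # leak + temporally shaped drive
    fired = v >= v_thresh                             # threshold crossing -> spike
    v[fired] = 0.0                                    # reset neurons that spiked
    v += w_rec @ fired.astype(float)                  # recurrent kick from those spikes
    states.append(fired.astype(float))

# A linear readout fitted to these reservoir states over time would play the
# role of the speech-recognition classifier mentioned in the abstract.
states = np.asarray(states)
print("spike count per reservoir neuron (first 10):", states.sum(axis=0)[:10])
```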
Saved in:
Published in: | arXiv.org 2017-03 |
---|---|
Main authors: | Smith, Michael R; Hill, Aaron J; Carlson, Kristo D; Vineyard, Craig M; Donaldson, Jonathon; Follett, David R; Follett, Pamela L; Naegle, John H; James, Conrad D; Aimone, James B |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Computer architecture; Computer simulation; Mathematical analysis; Matrix algebra; Matrix methods; Neural networks; Neurons; Neurotransmitters; Nonlinear response; Response functions; Speech recognition; Spiking; State machines; Synapses |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Smith, Michael R; Hill, Aaron J; Carlson, Kristo D; Vineyard, Craig M; Donaldson, Jonathon; Follett, David R; Follett, Pamela L; Naegle, John H; James, Conrad D; Aimone, James B |
description | Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU-demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms. |
format | Article |
publisher | Cornell University Library, arXiv.org (Ithaca) |
publicationdate | 2017-03-21 |
rights | 2017. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2017-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2074100830 |
source | Free E-Journals |
subjects | Algorithms; Computer architecture; Computer simulation; Mathematical analysis; Matrix algebra; Matrix methods; Neural networks; Neurons; Neurotransmitters; Nonlinear response; Response functions; Speech recognition; Spiking; State machines; Synapses |
title | A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T02%3A34%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=A%20Digital%20Neuromorphic%20Architecture%20Efficiently%20Facilitating%20Complex%20Synaptic%20Response%20Functions%20Applied%20to%20Liquid%20State%20Machines&rft.jtitle=arXiv.org&rft.au=Smith,%20Michael%20R&rft.date=2017-03-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2074100830%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2074100830&rft_id=info:pmid/&rfr_iscdi=true |