An efficient method for VLSI implementation of useful neural network activation functions
A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations.
Saved in:
Main authors: | Andrew Stephen Cassidy, John Vernon Arthur, Pallab Datta, Jennifer Klamo, Steven Kyle Esser, Rathinakumar Appuswamy, Filipp Akopyan, Myron D Flickner, Jun Sawada, Dharmendra S Modha, Carlos Ortega Otero, Brian Seisho Taba |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | |
Online access: | Order full text |
creator | Andrew Stephen Cassidy; John Vernon Arthur; Pallab Datta; Jennifer Klamo; Steven Kyle Esser; Rathinakumar Appuswamy; Filipp Akopyan; Myron D Flickner; Jun Sawada; Dharmendra S Modha; Carlos Ortega Otero; Brian Seisho Taba |
description | A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more look-up tables. |
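The description above outlines activation units that fetch a scaling term and a re-ranging term from look-up tables to realize a configurable activation function. The record does not disclose the actual circuit, so the following is only a minimal software sketch under the assumption that the re-ranging term acts as a clamp bound on the output range and the scaling term as a multiplier; all names, the LUT contents, and that interpretation are hypothetical.

```python
import numpy as np

def make_activation(lut, key):
    """Build an activation from a hypothetical per-function look-up table.

    Each LUT entry holds (scaling term s, re-ranging term r); the unit
    computes y = clip(s * x, lo, r), where lo is 0 for a ReLU-style
    function and -r for a symmetric bounded function.
    """
    s, r = lut[key]
    lo = 0.0 if key == "relu" else -r
    def activation(x):
        return np.clip(s * np.asarray(x, dtype=np.float64), lo, r)
    return activation

# Illustrative LUT: scaling and re-ranging terms per configured function.
LUT = {
    "relu":    (1.0, 127.0),  # unit slope, output clamped to [0, 127]
    "bounded": (0.5, 8.0),    # half-scale, output clamped to [-8, 8]
}

relu = make_activation(LUT, "relu")
bounded = make_activation(LUT, "bounded")
```

Keeping only two small terms per function in a table, rather than a full transfer-curve LUT, is one plausible reading of how such a scheme stays cheap in a VLSI activation unit.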
format | Patent |
fullrecord | Raw esp@cenet XML record for patent GB2606600A, published 2022-11-16 (open access, free to read); its title, creator, abstract, and subject fields duplicate those listed in this record. |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_GB2606600A |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | An efficient method for VLSI implementation of useful neural network activation functions |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T09%3A23%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Andrew%20Stephen%20Cassidy&rft.date=2022-11-16&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EGB2606600A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |