Locality-Based Encoder and Model Quantization for Efficient Hyper-Dimensional Computing

Brain-inspired hyper-dimensional (HD) computing is a new computing paradigm that emulates neuronal activity in high-dimensional space. The first step in HD computing is to map each data point into a high-dimensional space (e.g., 10,000 dimensions), which requires thousands of operations for every element of the data in the original domain. Encoding alone takes about 80% of the execution time of training. In this article, we propose ReHD, a complete rework of encoding, training, and inference in HD computing for a more hardware-friendly implementation. ReHD includes a fully binary encoding module for energy-efficient, high-accuracy classification. The encoding module, based on random projection with a predictable memory access pattern, can be implemented efficiently in hardware. ReHD is the first HD-based approach that projects data at a 1:1 ratio to the original data and enables all training and inference computation to be performed on binary hypervectors. After the optimizations ReHD adds to the encoding process, retraining and inference become the energy-intensive part of HD computing. To resolve this, we additionally propose model quantization: class hypervectors are stored using n bits, where n ranges from 1 to 32, rather than at full 32-bit precision, allowing fine-grained tuning of the tradeoff between energy efficiency and accuracy. To further improve ReHD efficiency, we developed an online dimension reduction approach that removes insignificant hypervector dimensions during training.
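
As a rough illustration of the pipeline the abstract describes, the sketch below shows a toy HD classifier with a binary random-projection encoder and n-bit quantization of the class hypervectors. It is not the authors' ReHD implementation: the dimensionality, the plain (non-locality-optimized) random projection, the uniform quantizer, the synthetic data, and every function name are assumptions made for illustration only.

```python
# Illustrative sketch only -- NOT the authors' ReHD implementation.
# Toy HD-computing classifier with (a) binary random-projection encoding and
# (b) n-bit quantization of class hypervectors, loosely following the abstract.
import numpy as np

D = 1024           # hypervector dimensionality (the paper uses ~10,000)
rng = np.random.default_rng(0)


def make_projection(num_features: int) -> np.ndarray:
    """Random +/-1 projection matrix; a stand-in for the locality-based
    encoder, which additionally arranges accesses for hardware locality."""
    return rng.choice([-1.0, 1.0], size=(D, num_features))


def encode(x: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Map a feature vector to a binary {0,1} hypervector via the sign of P @ x."""
    return (proj @ x >= 0).astype(np.int8)


def train(X: np.ndarray, y: np.ndarray, proj: np.ndarray, n_classes: int) -> np.ndarray:
    """Single-pass training: bundle (sum) the encoded hypervectors per class."""
    model = np.zeros((n_classes, D), dtype=np.float64)
    for xi, yi in zip(X, y):
        model[yi] += encode(xi, proj)
    return model


def quantize(model: np.ndarray, n_bits: int) -> np.ndarray:
    """Store each class hypervector with n_bits per dimension (1..32) instead
    of full precision, by uniform scaling and rounding."""
    levels = 2 ** n_bits - 1
    lo, hi = model.min(), model.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((model - lo) / scale).astype(np.int64)


def predict(x: np.ndarray, model: np.ndarray, proj: np.ndarray) -> int:
    """Classify by highest dot-product similarity with the class hypervectors."""
    h = encode(x, proj)
    return int(np.argmax(model @ h))


if __name__ == "__main__":
    # Tiny synthetic 3-class problem, just to run the pipeline end to end.
    n_classes, n_features = 3, 64
    centers = rng.normal(size=(n_classes, n_features))
    X = np.vstack([c + 0.3 * rng.normal(size=(200, n_features)) for c in centers])
    y = np.repeat(np.arange(n_classes), 200)

    proj = make_projection(n_features)
    model = train(X, y, proj, n_classes)
    model_q = quantize(model, n_bits=4)      # 4-bit class hypervectors

    acc = np.mean([predict(xi, model_q, proj) == yi for xi, yi in zip(X, y)])
    print(f"training-set accuracy with 4-bit model: {acc:.2f}")
```

The paper's online dimension reduction (dropping insignificant hypervector dimensions during training) is not reproduced here; the sketch only covers encoding and model quantization.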

Bibliographic Details
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022-04, Vol. 41 (4), p. 897-907
Main authors: Morris, Justin; Fernando, Roshan; Hao, Yilun; Imani, Mohsen; Aksanli, Baris; Rosing, Tajana
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TCAD.2021.3069139
Publisher: New York: IEEE
ISSN: 0278-0070
EISSN: 1937-4151
CODEN: ITCSDI
Source: IEEE Electronic Library (IEL)

Subjects:
Accuracy
Brain-inspired computing
Coders
Computation
Computational modeling
Data models
Data points
Encoding
Energy efficiency
Forecasting
Hardware
hyper-dimensional (HD) computing
Inference
machine learning
Measurement
Modules
Quantization (signal)
Task analysis
Training

URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T13%3A34%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Locality-Based%20Encoder%20and%20Model%20Quantization%20for%20Efficient%20Hyper-Dimensional%20Computing&rft.jtitle=IEEE%20transactions%20on%20computer-aided%20design%20of%20integrated%20circuits%20and%20systems&rft.au=Morris,%20Justin&rft.date=2022-04-01&rft.volume=41&rft.issue=4&rft.spage=897&rft.epage=907&rft.pages=897-907&rft.issn=0278-0070&rft.eissn=1937-4151&rft.coden=ITCSDI&rft_id=info:doi/10.1109/TCAD.2021.3069139&rft_dat=%3Cproquest_RIE%3E2640430326%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2640430326&rft_id=info:pmid/&rft_ieee_id=9388914&rfr_iscdi=true