Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics, and the non-ideal peripheral circuitry in AIMC chips, require adapting DNNs to be deployed on such hardware to achieve equivalent accuracy to digital computing. In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. The AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices to properly perform inference and training. We also present an overview of the Analog AI Cloud Composer, a platform that provides the benefits of using the AIHWKit simulation in a fully managed cloud setting along with physical AIMC hardware access, freely available at https://aihw-composer.draco.res.ibm.com. Finally, we show examples on how users can expand and customize AIHWKit for their own needs. This tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.
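To make the "noisy and non-linear device characteristics" concrete, the sketch below simulates two common AIMC non-idealities on a layer's matrix-vector product: additive noise on the stored weights and finite-resolution quantization of the crossbar output (an ADC). This is an illustrative NumPy sketch of the concept only, not the AIHWKit API; the function name `analog_matmul` and the noise/quantization parameters are assumptions chosen for illustration.

```python
# Illustrative sketch (NOT the AIHWKit API): effect of analog device noise
# and output quantization on a DNN layer's matrix-vector multiply.
import numpy as np

rng = np.random.default_rng(0)

def analog_matmul(weights, x, noise_std=0.02, out_bits=8):
    """Matrix-vector product with additive weight noise and output
    quantization, mimicking two common AIMC non-idealities."""
    # Additive Gaussian noise on the conductance-encoded weights,
    # scaled relative to the largest weight magnitude.
    noisy_w = weights + rng.normal(
        0.0, noise_std * np.abs(weights).max(), weights.shape
    )
    y = noisy_w @ x
    # Model a finite-resolution ADC at the crossbar output.
    scale = float(np.abs(y).max()) or 1.0
    levels = 2 ** (out_bits - 1) - 1
    return np.round(y / scale * levels) / levels * scale

w = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
ideal = w @ x
approx = analog_matmul(w, x)
# The deviation is small but non-zero; hardware-aware training
# exposes the network to such perturbations so accuracy survives them.
print(np.max(np.abs(ideal - approx)))
```

Hardware-aware training, as performed by simulators like AIHWKit, injects perturbations of this kind into the forward pass during training so that the learned weights become robust to them at deployment time.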

Full description

Bibliographic details
Published in: arXiv.org, 2024-01
Main authors: Le Gallo, Manuel; Lammie, Corey; Buechel, Julian; Carta, Fabio; Fagbohungbe, Omobayode; Mackin, Charles; Tsai, Hsinyu; Narayanan, Vijay; Sebastian, Abu; El Maghraoui, Kaoutar; Rasch, Malte J
Format: Article
Language: English
Online access: Full text
DOI: 10.48550/arxiv.2307.09357
EISSN: 2331-8422
Subjects:
Artificial neural networks
Best practices
Circuits
Cloud computing
Computer Science - Emerging Technologies
Computer Science - Learning
Digital computers
Energy consumption
Hardware
Inference
Neural networks
Training