AutoGMap: Learning to Map Large-scale Sparse Graphs on Memristive Crossbars

Bibliographic Details

Published in: arXiv.org, 2023-03
Main Authors: Lyu, Bo; Wang, Shengbo; Wen, Shiping; Shi, Kaibo; Yang, Yin; Zeng, Lingfang; Huang, Tingwen
Format: Article
Language: English
Online Access: Full text
Description: The sparse representation of graphs has shown great potential for accelerating the computation of graph applications (e.g., social networks, knowledge graphs) on traditional computing architectures (CPU, GPU, or TPU). However, the exploration of large-scale sparse graph computing on processing-in-memory (PIM) platforms (typically built on memristive crossbars) is still in its infancy. To implement the computation or storage of large-scale or batched graphs on memristive crossbars, a natural assumption is that a single large-scale crossbar is required, but such a crossbar would suffer from low utilization. Some recent works question this assumption: to avoid wasting storage and computational resources, fixed-size or progressively scheduled "block partition" schemes have been proposed. However, these methods are coarse-grained or static, and are not effectively sparsity-aware. This work proposes a dynamic sparsity-aware mapping scheme generation method that models the problem as a sequential decision-making process and optimizes it with a reinforcement learning (RL) algorithm (REINFORCE). Our generation model (an LSTM combined with a dynamic-fill scheme) achieves remarkable mapping performance on a small-scale graph/matrix (complete mapping costs 43% of the original matrix area) and on two large-scale matrices (costing 22.5% of the area on qh882 and 17.1% on qh1484). Our method may be extended to sparse graph computing on other PIM architectures, and is not limited to memristive device-based platforms.
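The area figures quoted in the abstract (43%, 22.5%, 17.1%) can be read as the summed area of the mapped crossbar blocks divided by the area of the full dense matrix. A minimal sketch of this cost metric, assuming a mapping is given as a list of `(row, col, height, width)` blocks that must cover every nonzero (the function name and partition format here are illustrative, not the paper's actual interface):

```python
import numpy as np

def mapping_area_ratio(matrix, blocks):
    """Area cost of a block-partition mapping, relative to the dense matrix.

    matrix : 2-D array whose nonzeros must all be covered by some block.
    blocks : list of (row, col, height, width) crossbar blocks.
    Returns summed block area / full matrix area.
    """
    covered = np.zeros(matrix.shape, dtype=bool)
    block_area = 0
    for r, c, h, w in blocks:
        covered[r:r + h, c:c + w] = True
        block_area += h * w
    assert np.all(covered[matrix != 0]), "partition leaves a nonzero unmapped"
    return block_area / matrix.size

# Toy example: a 4x4 matrix with nonzeros clustered in two 2x2 corners.
m = np.zeros((4, 4))
m[0:2, 0:2] = 1
m[2:4, 2:4] = 1
# Mapping each cluster to its own 2x2 crossbar costs 8/16 = 50% area,
# instead of 100% for a single full-size 4x4 crossbar.
print(mapping_area_ratio(m, [(0, 0, 2, 2), (2, 2, 2, 2)]))  # → 0.5
```

A sparsity-aware partition wins exactly when the blocks it emits hug the nonzero structure; a single full-size block always yields a ratio of 1.0.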
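The abstract frames partition generation as a sequential decision process optimized with REINFORCE. A heavily simplified sketch of that optimization loop, with the paper's LSTM policy replaced by independent per-step Bernoulli logits and a made-up toy reward (everything here, including the target sequence, is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: the policy emits a sequence of
# binary "split / don't split" decisions; the (made-up) reward prefers
# splitting at positions 1 and 3.
TARGET = np.array([0, 1, 0, 1])

def reward(actions):
    return float(np.sum(actions == TARGET))  # maximum 4.0

theta = np.zeros(4)  # one logit per decision step (an LSTM in the paper)

for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))           # Bernoulli action probs
    actions = (rng.random(4) < p).astype(int)  # sample a decision sequence
    r = reward(actions)
    baseline = 2.0                             # constant variance-reduction baseline
    # REINFORCE gradient for Bernoulli policies: (a - p) * (reward - baseline)
    theta += 0.1 * (actions - p) * (r - baseline)

print((1.0 / (1.0 + np.exp(-theta))).round(2))  # probs move toward TARGET
```

The gradient `(a - p)` is the score function of a Bernoulli policy with sigmoid parameterization; scaling it by the baselined reward nudges the policy toward decision sequences that score well, which is the same mechanism the paper applies to its sparsity-aware block decisions.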
DOI: 10.48550/arxiv.2111.07684
EISSN: 2331-8422
Record ID: cdi_arxiv_primary_2111_07684
Source: arXiv.org; Free E-Journals
Subjects:
Algorithms
Computation
Computer Science - Emerging Technologies
Computer Science - Learning
Decision making
Graphical representations
Graphs
Knowledge representation
Machine learning
Mapping
Social networks
Sparsity