PreGNN: Hardware Acceleration to Take Preprocessing Off the Critical Path in Graph Neural Networks
In this paper, we observe that the main performance bottleneck of emerging graph neural networks (GNNs) is not the inference algorithms themselves, but their graph data preprocessing. To take such preprocessing off the critical path in GNNs, we propose PreGNN, a novel hardware automation architecture that accelerates all the tasks of GNN preprocessing from the beginning to the end. Specifically, PreGNN accelerates graph generation in parallel, samples neighbor nodes of a given graph, and prepares graph datasets entirely in hardware. To reduce the long latency of GNN preprocessing over hardware, we also propose simple, efficient combinational logic that can perform radix sort and arrange the data in a self-governing manner. The evaluation results show that PreGNN can shorten the end-to-end latency of GNN inferences by 10.7× while consuming 3.3× less energy, compared to a GPU-only system.
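The abstract names three preprocessing stages that PreGNN moves into hardware: parallel graph generation, neighbor sampling, and a radix-sort-based arrangement of the data. The letter itself contains no code, so the following is a minimal software sketch of those stages in Python; the CSR layout, the 8-bit digit width, the fanout parameter, and all function names are illustrative assumptions, not the authors' hardware interface.

```python
import numpy as np

def lsd_radix_sort(keys: np.ndarray, bits: int = 8) -> np.ndarray:
    """Stable least-significant-digit radix sort; returns argsort indices.

    Each pass is a stable sort on one `bits`-wide digit -- the kind of
    fixed-function pass that maps onto combinational logic. The 8-bit
    digit width is an assumption, not the paper's design.
    """
    idx = np.arange(len(keys))
    max_key = int(keys.max()) if len(keys) else 0
    shift = 0
    while True:
        digit = (keys[idx] >> shift) & ((1 << bits) - 1)
        idx = idx[np.argsort(digit, kind="stable")]  # stable keeps prior-pass order
        shift += bits
        if (max_key >> shift) == 0:
            break
    return idx

def build_csr(edges: np.ndarray, num_nodes: int):
    """Graph generation: turn an unordered edge list into CSR adjacency.

    Grouping edges by source node is the sort that dominates this stage;
    the radix sort above keeps it O(passes * E) instead of O(E log E).
    """
    order = lsd_radix_sort(edges[:, 0])
    sorted_edges = edges[order]
    counts = np.bincount(sorted_edges[:, 0], minlength=num_nodes)
    offsets = np.concatenate(([0], np.cumsum(counts)))
    return offsets, sorted_edges[:, 1]

def sample_neighbors(offsets, cols, seeds, fanout, rng):
    """Neighbor sampling: draw up to `fanout` neighbors per seed node."""
    sampled = {}
    for v in seeds:
        nbrs = cols[offsets[v]:offsets[v + 1]]
        if len(nbrs) > fanout:
            nbrs = rng.choice(nbrs, size=fanout, replace=False)
        sampled[int(v)] = nbrs.tolist()
    return sampled

# Toy run: the sampled subgraph is what would feed GNN inference.
edges = np.array([[0, 1], [2, 0], [0, 2], [1, 2], [2, 1]])
offsets, cols = build_csr(edges, num_nodes=3)
print(sample_neighbors(offsets, cols, seeds=[0, 2], fanout=2,
                       rng=np.random.default_rng(0)))
```

Radix sort is a natural fit for this workload because every pass applies the same fixed-width operation to every element with no data-dependent branching, which pipelines well in combinational logic; that is presumably the property behind the abstract's "self-governing" sorter.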
Published in: IEEE Computer Architecture Letters, 2022-07, Vol. 21 (2), pp. 117-120
Main authors: Gouk, Donghyun; Kang, Seungkwan; Kwon, Miryeong; Jang, Junhyeok; Choi, Hyunkyu; Lee, Sangwon; Jung, Myoungsoo
Format: Article
Language: English
DOI: 10.1109/LCA.2022.3193256
ISSN: 1556-6056
EISSN: 1556-6064
Source: IEEE Electronic Library (IEL)
Subjects: Algorithms; Arrays; Automation; Data preprocessing; GNN preprocessing; Graph neural network; Graph neural networks; Hardware; Hardware accelerator; Inference algorithms; Logic gates; Network latency; Preprocessing; Sorting; Task analysis