Entropy Aware Training for Fast and Accurate Distributed GNN
Several distributed frameworks have been developed to scale Graph Neural Networks (GNNs) on billion-size graphs. On several benchmarks, we observe that the graph partitions generated by these frameworks have heterogeneous data distributions and class imbalance, affecting convergence, and resulting in lower performance than centralized implementations. We holistically address these challenges and develop techniques that reduce training time and improve accuracy. We develop an Edge-Weighted partitioning technique to improve the micro average F1 score (accuracy) by minimizing the total entropy. Furthermore, we add an asynchronous personalization phase that adapts each compute-host's model to its local data distribution. We design a class-balanced sampler that considerably speeds up convergence. We implemented our algorithms on the DistDGL framework and observed that our training techniques scale much better than the existing training approach. We achieved a (2-3x) speedup in training time and 4% improvement on average in micro-F1 scores on 5 large graph benchmarks compared to the standard baselines.
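As a concrete aid to reading the abstract, the sketch below illustrates two ingredients it names: an entropy measure of how skewed a partition's class-label distribution is, and inverse-frequency class-balanced sampling weights. This is a minimal illustration, not the authors' DistDGL implementation; the exact "total entropy" objective that the Edge-Weighted partitioner minimizes, and the details of the paper's sampler, may be defined differently, and all function names here are hypothetical.

```python
import numpy as np


def partition_label_entropy(labels, num_classes):
    """Shannon entropy (in nats) of the class-label distribution inside one partition.

    Low entropy = the partition is dominated by a few classes (skewed);
    high entropy = classes are evenly represented. The paper's precise
    "total entropy" objective may aggregate or define this differently.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(np.float64)
    p = counts / counts.sum()
    p = p[p > 0]  # absent classes contribute nothing
    return float(-(p * np.log(p)).sum())


def class_balanced_weights(labels, num_classes):
    """Per-node sampling weights inversely proportional to local class frequency.

    A mini-batch sampler drawing nodes with these weights sees under-represented
    classes more often; this is the standard inverse-frequency form of
    class-balanced sampling, not necessarily the paper's exact scheme.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(np.float64)
    inv = np.where(counts > 0, 1.0 / counts, 0.0)
    w = inv[labels]
    return w / w.sum()


# Toy check: two 3-class partitions with very different local label mixes.
part_a = np.array([0, 0, 0, 0, 1, 2])  # skewed toward class 0 -> low entropy
part_b = np.array([0, 1, 1, 2, 2, 0])  # evenly spread         -> high entropy
for name, part in [("A", part_a), ("B", part_b)]:
    print(name, partition_label_entropy(part, 3), class_balanced_weights(part, 3))
```

If one were wiring such weights into a PyTorch/DGL training loop, they could feed something like torch.utils.data.WeightedRandomSampler; the sampler used inside DistDGL in the paper is presumably implemented differently.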
Saved in:
Published in: | arXiv.org 2023-11 |
---|---|
Main authors: | Deshmukh, Dhruv; Gupta, Gagan Raj; Chawla, Manisha; Jatala, Vishwesh; Haldar, Anirban |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Benchmarks; Convergence; Entropy; Graph neural networks; Partitions (mathematics); Training |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Deshmukh, Dhruv; Gupta, Gagan Raj; Chawla, Manisha; Jatala, Vishwesh; Haldar, Anirban |
description | Several distributed frameworks have been developed to scale Graph Neural Networks (GNNs) on billion-size graphs. On several benchmarks, we observe that the graph partitions generated by these frameworks have heterogeneous data distributions and class imbalance, affecting convergence, and resulting in lower performance than centralized implementations. We holistically address these challenges and develop techniques that reduce training time and improve accuracy. We develop an Edge-Weighted partitioning technique to improve the micro average F1 score (accuracy) by minimizing the total entropy. Furthermore, we add an asynchronous personalization phase that adapts each compute-host's model to its local data distribution. We design a class-balanced sampler that considerably speeds up convergence. We implemented our algorithms on the DistDGL framework and observed that our training techniques scale much better than the existing training approach. We achieved a (2-3x) speedup in training time and 4% improvement on average in micro-F1 scores on 5 large graph benchmarks compared to the standard baselines. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2886748065 |
source | Free E-Journals |
subjects | Algorithms; Benchmarks; Convergence; Entropy; Graph neural networks; Partitions (mathematics); Training |
title | Entropy Aware Training for Fast and Accurate Distributed GNN |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T08%3A59%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Entropy%20Aware%20Training%20for%20Fast%20and%20Accurate%20Distributed%20GNN&rft.jtitle=arXiv.org&rft.au=Deshmukh,%20Dhruv&rft.date=2023-11-04&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2886748065%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2886748065&rft_id=info:pmid/&rfr_iscdi=true |