AGL: a scalable system for industrial-purpose graph machine learning
Published in: Proceedings of the VLDB Endowment, 2020-08, Vol. 13 (12), pp. 3125-3137
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: Machine learning over graphs has emerged as a powerful tool for graph data. However, it is challenging for industrial communities to leverage techniques such as graph neural networks (GNNs) to solve real-world problems at scale, because of the inherent data dependencies in graphs. As a consequence, a GNN cannot simply be trained with classic learning systems, for instance a parameter server, which assumes data parallelism. Existing systems store graph data in memory for fast access, either on a single machine or in remote graph stores. This approach has three major drawbacks. First, such systems do not scale, limited either by memory capacity or by the bandwidth between graph stores and workers. Second, they require extra development of graph stores instead of exploiting mature infrastructures, such as MapReduce, that guarantee good system properties. Third, they focus on training but neglect the performance of inference over graphs, leaving them unintegrated systems.

In this paper, we design AGL, a scalable and integrated system with fully functional training and inference for GNNs. Our system design follows the message passing scheme underlying the computations of GNNs. We generate the K-hop neighborhood, an information-complete subgraph for each node, and perform inference simply by merging values from in-edge neighbors and propagating values to out-edge neighbors via MapReduce. Moreover, because each K-hop neighborhood is information-complete for its node, the resulting training samples are independent of one another, so training can run directly on parameter servers. AGL, implemented on mature infrastructures, can finish training a 2-layer GNN on a graph with billions of nodes and hundreds of billions of edges in 14 hours, and complete the inference in 1.2 hours.
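The propagation pattern the abstract describes can be illustrated with a minimal sketch. This is not AGL's implementation, only a toy in-memory stand-in for a distributed MapReduce job: a map phase emits each node's value along its out-edges, and a reduce phase merges the values arriving on each node's in-edges. All names (map_phase, reduce_phase, the example graph, and summation as the merge function) are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(edges, values):
    # Map: emit (dst, value[src]) along every directed edge src -> dst,
    # i.e., propagate each node's value to its out-edge neighbors.
    for src, dst in edges:
        yield dst, values[src]

def reduce_phase(emitted):
    # Reduce: merge all values arriving at a node via its in-edges.
    # Summation is an arbitrary choice; AGL's merge function may differ.
    merged = defaultdict(float)
    for dst, val in emitted:
        merged[dst] += val
    return dict(merged)

# Each map/reduce round extends a node's receptive field by one hop;
# K rounds correspond to the K-hop neighborhood used for training
# and inference in the paper.
edges = [(0, 1), (0, 2), (1, 2)]
values = {0: 1.0, 1: 2.0, 2: 3.0}
K = 2
for _ in range(K):
    values = {**values, **reduce_phase(map_phase(edges, values))}
print(values)  # merged values after K propagation rounds
```

In the actual system each phase would run as a distributed MapReduce job over sharded edge and value tables rather than in-process generators.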
ISSN: 2150-8097
DOI: 10.14778/3415478.3415539