Asynchronous Algorithmic Alignment with Cocycles
Saved in:
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs). But typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously. When applying GNNs to learn to execute dynamic programming algorithms, however, on most steps only a handful of the nodes have meaningful updates to send. Hence, one runs the risk of inefficiency by sending too much irrelevant data across the graph. More importantly, many intermediate GNN steps have to learn identity functions, which is a non-trivial learning problem. In this work, we explicitly separate the concepts of node state update and message function invocation. With this separation, we obtain a mathematical formulation that allows us to reason about asynchronous computation in both algorithms and neural networks. Our analysis yields several practical implementations of synchronous scalable GNN layers that are provably invariant under various forms of asynchrony. |
DOI: | 10.48550/arxiv.2306.15632 |
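The separation of node state update from message function invocation described in the abstract can be illustrated with a small, self-contained sketch. The code below is not the paper's implementation: it uses Bellman-Ford shortest paths (a dynamic programming algorithm of the kind the abstract mentions) to contrast a synchronous schedule, where every edge sends a message on every round, with an asynchronous one that only invokes the message function for nodes whose state actually changed. All names here (`sync_bellman_ford`, `async_bellman_ford`, the toy `edges` graph) are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) contrasting synchronous
# message passing with an asynchronous schedule on Bellman-Ford shortest paths.
import math

# Tiny weighted digraph for illustration: edges[(u, v)] = weight
edges = {(0, 1): 4.0, (0, 2): 1.0, (2, 1): 2.0, (1, 3): 1.0, (2, 3): 5.0}
num_nodes = 4
source = 0

def sync_bellman_ford():
    """Synchronous: every edge sends a message on every round, even when
    the sender's state has not changed (an identity message)."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    messages_sent = 0
    for _ in range(num_nodes - 1):           # fixed number of rounds
        new_dist = list(dist)
        for (u, v), w in edges.items():      # every edge fires, always
            messages_sent += 1
            new_dist[v] = min(new_dist[v], dist[u] + w)
        dist = new_dist
    return dist, messages_sent

def async_bellman_ford():
    """Asynchronous: a node invokes its message function only when its own
    state changed; state update and message invocation are separate steps."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    frontier = {source}                      # nodes with meaningful updates
    messages_sent = 0
    while frontier:
        u = frontier.pop()                   # arbitrary pop order: the final
        for (a, v), w in edges.items():      # distances are invariant to it
            if a != u:
                continue
            messages_sent += 1
            if dist[u] + w < dist[v]:        # a state update at the receiver
                dist[v] = dist[u] + w
                frontier.add(v)              # triggers a future invocation
    return dist, messages_sent

print(sync_bellman_ford())   # ([0.0, 3.0, 1.0, 4.0], 15)
print(async_bellman_ford())  # same distances; far fewer messages here
```

Both schedules return the same distances, since min-plus relaxation is invariant to the order in which updates are applied, but the asynchronous schedule sends far fewer messages on this graph (the exact count depends on pop order). This mirrors the abstract's observation that, on most synchronous steps, only a handful of nodes carry non-identity updates.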