On the Expressiveness and Generalization of Hypergraph Neural Networks
Saved in:
Main authors: , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions they can realize. Our result is a hierarchy of problems they can solve, defined in terms of hyperparameters such as depth and edge arity. Next, we analyze the learning properties of these networks, focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.
DOI: 10.48550/arxiv.2303.05490