Pruning Edges and Gradients to Learn Hypergraphs from Larger Sets
Main authors: , ,
Format: Article
Language: English
Abstract: This paper aims for set-to-hypergraph prediction, where the goal is to infer the set of relations for a given set of entities. This is a common abstraction for applications in particle physics, biological systems, and combinatorial optimization. We address two common scaling problems encountered in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both leading to higher memory requirements. We make three contributions. First, we propose to predict and supervise the \emph{positive} edges only, which changes the asymptotic memory scaling from exponential to linear. Second, we introduce a training method that encourages iterative refinement of the predicted hypergraph, which allows us to skip iterations in the backward pass for improved efficiency and constant memory usage. Third, we combine both contributions in a single set-to-hypergraph model that enables us to address problems with larger input set sizes. We provide ablations for our main technical contributions and show that our model outperforms the prior state of the art, especially for larger sets.
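The first contribution rests on a counting argument: a set of n entities admits 2^n - 1 candidate hyperedges (one per non-empty subset), so supervising every candidate costs memory exponential in n, while supervising only the k positive hyperedges stores just their incidences, which is linear in k. A minimal Python sketch of that comparison, with a hypothetical list of positive hyperedges standing in for ground truth (not code from the paper):

```python
def dense_target_size(n: int) -> int:
    # Supervising every candidate hyperedge means one label per
    # non-empty subset of the n entities: exponential in n.
    return 2 ** n - 1

def positive_only_size(positive_edges) -> int:
    # Supervising only the positive hyperedges stores just their
    # incidences: linear in the number of positives.
    return sum(len(edge) for edge in positive_edges)

positives = [{0, 1, 2}, {3, 4}, {5, 6, 7, 8}]  # hypothetical ground truth
print(dense_target_size(30))          # 1073741823 candidate labels
print(positive_only_size(positives))  # 9 stored incidences
```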
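The second contribution, skipping iterations in the backward pass, can be read as running all but the final refinement step outside the autograd graph, so activation memory does not grow with the iteration count. A rough PyTorch-style sketch under that reading; `Refiner` and `refine` are hypothetical names, not the paper's API, and the paper's exact scheme may differ:

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    # Hypothetical stand-in for a refinement network: one step maps the
    # current hyperedge embeddings to an updated set of embeddings.
    def __init__(self, dim: int):
        super().__init__()
        self.step = nn.Linear(dim, dim)

    def forward(self, edges: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.step(edges))

def refine(model: nn.Module, edges: torch.Tensor, n_iters: int) -> torch.Tensor:
    # Run all but the last iteration without building an autograd graph,
    # then one tracked iteration: activation memory is constant in n_iters.
    with torch.no_grad():
        for _ in range(n_iters - 1):
            edges = model(edges)
    return model(edges)  # only this step contributes gradients

model = Refiner(dim=16)
edges = torch.randn(8, 16)             # 8 candidate hyperedges, 16-dim each
out = refine(model, edges, n_iters=10)
out.sum().backward()                    # backprop touches one iteration, not ten
```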
DOI: 10.48550/arxiv.2106.13919