Convex and Bilevel Optimization for Neuro-Symbolic Inference and Learning
Format: Article
Language: English
Abstract: We leverage convex and bilevel optimization techniques to develop a general gradient-based parameter learning framework for neural-symbolic (NeSy) systems. We demonstrate our framework with NeuPSL, a state-of-the-art NeSy architecture. To achieve this, we propose a smooth primal and dual formulation of NeuPSL inference and show that learning gradients are functions of the optimal dual variables. Additionally, we develop a dual block coordinate descent algorithm for the new formulation that naturally exploits warm-starts. This leads to over 100x learning runtime improvements over the current best NeuPSL inference method. Finally, we provide extensive empirical evaluations across 8 datasets covering a range of tasks and demonstrate that our learning framework achieves up to a 16 percentage point prediction performance improvement over alternative learning methods.
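The abstract's claim about warm-starts can be illustrated with a generic sketch (this is not the authors' NeuPSL implementation): block coordinate descent exactly minimizes a smooth objective over one block of variables at a time, and when the problem is perturbed slightly (as happens between gradient-based parameter updates), restarting from the previous optimum typically needs fewer sweeps than restarting from scratch. The quadratic objective, block partition, and all numbers below are illustrative assumptions standing in for the paper's dual problem.

```python
import numpy as np

def block_coordinate_descent(A, b, x0, blocks, tol=1e-10, max_iter=1000):
    """Minimize the quadratic 0.5 * x'Ax - b'x (A symmetric positive
    definite) by exact minimization over one block of coordinates at a
    time. Returns the minimizer and the number of full sweeps used."""
    n = len(b)
    x = x0.astype(float).copy()
    for sweep in range(1, max_iter + 1):
        x_prev = x.copy()
        for blk in blocks:
            rest = np.setdiff1d(np.arange(n), blk)
            # Exact block minimizer given the other coordinates:
            # A[blk, blk] x[blk] = b[blk] - A[blk, rest] x[rest]
            rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
            x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
        if np.linalg.norm(x - x_prev) < tol:
            return x, sweep
    return x, max_iter

# Small SPD system standing in for a smooth dual objective.
A = np.array([[4., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 3., 1.],
              [0., 0., 1., 4.]])
b = np.array([1., 2., 3., 4.])
blocks = [np.array([0, 1]), np.array([2, 3])]

# Cold start on the original problem.
x_star, cold_sweeps = block_coordinate_descent(A, b, np.zeros(4), blocks)

# Slightly perturbed problem (e.g. after a small parameter update):
# warm-starting from the previous optimum needs at most as many sweeps.
b_new = b + 0.01
_, cold_new_sweeps = block_coordinate_descent(A, b_new, np.zeros(4), blocks)
x_warm, warm_sweeps = block_coordinate_descent(A, b_new, x_star, blocks)
```

The warm-started run begins much closer to the new optimum, so under the linear convergence of block coordinate descent on this objective it terminates in no more sweeps than the cold start.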
DOI: 10.48550/arxiv.2401.09651