SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning

Bibliographic Details
Published in: arXiv.org 2024-08
Main Authors: Tsoi, Ho Fung; Loncar, Vladimir; Dasu, Sridhara; Harris, Philip
Format: Article
Language: eng
Subjects:
Online Access: Full Text
Description
Summary: Contrary to genetic programming, the neural network approach to symbolic regression can efficiently handle high-dimensional inputs and leverage gradient methods for faster equation searching. Common ways of constraining expression complexity often involve multistage pruning with fine-tuning, which can result in significant performance loss. In this work, we propose \(\tt{SymbolNet}\), a neural network approach to symbolic regression in a novel framework that allows dynamic pruning of model weights, input features, and mathematical operators in a single training process, where both training loss and expression complexity are optimized simultaneously. We introduce a sparsity regularization term for each pruning type, which can adaptively adjust its strength, leading to convergence at a target sparsity ratio. Unlike most existing symbolic regression methods that struggle with datasets containing more than \(\mathcal{O}(10)\) inputs, we demonstrate the effectiveness of our model on the LHC jet tagging task (16 inputs), MNIST (784 inputs), and SVHN (3072 inputs). Our approach enables symbolic regression to achieve fast inference with nanosecond-scale latency on FPGAs for high-dimensional datasets in environments with stringent computational resource constraints, such as the high-energy physics experiments at the LHC.
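
To illustrate the adaptive sparsity regularization described in the abstract, the following is a minimal PyTorch-style sketch of one such term for weight pruning: an L1-type penalty whose strength is adjusted each step according to the gap between the measured and target sparsity. The class name, update rule, and all parameters are illustrative assumptions, not the paper's exact formulation.

import torch

class AdaptiveSparsityRegularizer:
    # Hypothetical sketch: an L1-style penalty whose strength grows while the
    # measured sparsity of a tensor is below a target ratio and relaxes once
    # the target is reached. Names and the update rule are assumptions.
    def __init__(self, target_sparsity, init_strength=1e-3, adapt_rate=1e-2,
                 prune_threshold=1e-3):
        self.target = target_sparsity            # desired fraction of near-zero weights
        self.strength = init_strength            # current regularization strength
        self.adapt_rate = adapt_rate             # how fast the strength reacts to the sparsity gap
        self.prune_threshold = prune_threshold   # magnitude below which a weight counts as pruned

    def __call__(self, weights):
        # Measure the current sparsity of the tensor.
        current = (weights.abs() < self.prune_threshold).float().mean().item()
        # Adapt the strength: increase it while under-sparse, decrease it when over-sparse.
        self.strength = max(self.strength * (1.0 + self.adapt_rate * (self.target - current)), 0.0)
        # Return the weighted L1 penalty to be added to the task loss.
        return self.strength * weights.abs().sum()

# Usage sketch inside a training loop (model, optimizer, and task_loss are assumed):
#     reg = AdaptiveSparsityRegularizer(target_sparsity=0.9)
#     loss = task_loss + reg(model.layer.weight)
#     loss.backward(); optimizer.step()

In this sketch a single regularizer targets one weight tensor; the paper's framework applies analogous terms per pruning type (model weights, input features, and mathematical operators) within one training run.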
ISSN: 2331-8422