Optimization Over Trained Neural Networks: Taking a Relaxing Walk
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Besides training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification, compression, and optimization with learned constraints. However, solving these formulations quickly becomes difficult as the network size grows, due to the weak linear relaxation and the dense constraint matrix. Recent years have seen improvements from cutting-plane algorithms, reformulations, and a heuristic based on Mixed-Integer Linear Programming (MILP). In this work, we propose a more scalable heuristic based on exploring global and local linear relaxations of the neural network model. Our heuristic is competitive with a state-of-the-art MILP solver and the prior heuristic, and it produces better solutions as the input dimension, depth, and number of neurons increase.
DOI: 10.48550/arxiv.2401.03451
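The abstract's key observation is that the MILP encoding of a ReLU network has a weak linear relaxation, whereas fixing a ReLU activation pattern makes the network affine on the corresponding input region, so a local step can be taken by solving a cheap linear program. The sketch below is only an illustration of that local-relaxation idea, not the authors' method or code: it assumes a feed-forward ReLU network with a scalar output layer of shape (1, h), a fixed 0/1 activation pattern per hidden layer, and box bounds on the input; the function name and argument layout are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def maximize_on_linear_region(weights, biases_unused, pattern, lb, ub):
    """weights: list of (W_i, b_i) pairs, one per layer; the last layer has a
    1-dimensional output. pattern: list of 0/1 arrays, one per hidden layer,
    fixing the ReLU activations. lb, ub: input box bounds.
    Returns (x*, objective) on the chosen linear region, or None if infeasible."""
    n = len(lb)
    A = np.eye(n)            # running affine map: pre-activations = A @ x + c
    c = np.zeros(n)
    A_ub, b_ub = [], []      # linear constraints defining the chosen region

    for (W, b), s in zip(weights[:-1], pattern):
        A, c = W @ A, W @ c + b              # pre-activation of this hidden layer
        for j, active in enumerate(s):
            if active:                        # neuron fixed "on": pre-activation >= 0
                A_ub.append(-A[j]); b_ub.append(c[j])
            else:                             # neuron fixed "off": pre-activation <= 0
                A_ub.append(A[j]); b_ub.append(-c[j])
        A, c = np.diag(s) @ A, np.diag(s) @ c  # apply the fixed ReLU mask

    W_out, b_out = weights[-1]
    A, c = W_out @ A, W_out @ c + b_out       # scalar network output: A @ x + c

    # linprog minimizes, so negate the objective to maximize the output.
    res = linprog(-A.ravel(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=list(zip(lb, ub)))
    if not res.success:
        return None
    return res.x, float(A.ravel() @ res.x + c.item())
```

A walk in this spirit would alternate between solving such local LPs and updating the activation pattern at the LP optimum; the paper's actual strategy for combining global and local relaxations may differ.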