Variable selection with neural networks
Published in: Neurocomputing (Amsterdam), July 1996, Vol. 12 (2-3), pp. 223-248
Format: Article
Language: English
Online access: Full text
Abstract: In this paper, we present three different neural network-based methods for performing variable selection. OCD (Optimal Cell Damage) is a pruning method which evaluates the usefulness of each input variable and prunes the least useful ones; it is related to the Optimal Brain Damage method of Le Cun et al. Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log of the prior distribution over the weights. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach makes it possible to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to improve significantly on them.
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/0925-2312(95)00121-2
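To make the two ideas in the abstract concrete, here is a minimal sketch (not the authors' code; the network shape, the prior width sigma, and the stand-in for the Hessian diagonal are illustrative assumptions). It shows a Gaussian-prior penalty added to the training cost, and an OCD-style usefulness score per input variable obtained by summing Optimal-Brain-Damage weight saliencies over the weights fanning out of each input.

```python
import numpy as np

def gaussian_prior_penalty(weights, sigma=1.0):
    # Negative log of a zero-mean Gaussian prior over the weights (up to a constant).
    # Adding this term to the training cost corresponds to classical weight decay.
    return np.sum(weights ** 2) / (2.0 * sigma ** 2)

def ocd_input_saliencies(W1, hessian_diag_W1):
    # OBD-style saliency of each weight: 0.5 * H_ii * w_i^2.
    # OCD sums the saliencies of all weights leaving one input unit,
    # yielding a usefulness score per input variable.
    weight_saliency = 0.5 * hessian_diag_W1 * W1 ** 2
    return weight_saliency.sum(axis=0)  # one score per input column

# Toy example with made-up numbers (hypothetical shapes, not from the paper).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 3))                # input-to-hidden weights: 5 hidden units, 3 inputs
H_diag = np.abs(rng.normal(size=W1.shape))  # stand-in for the diagonal of the cost Hessian
saliencies = ocd_input_saliencies(W1, H_diag)
least_useful = int(np.argmin(saliencies))
print("Gaussian-prior penalty:", gaussian_prior_penalty(W1))
print("input saliencies:", saliencies, "-> candidate to prune: input", least_useful)
```

The Gaussian-mixture prior mentioned in the abstract would replace the quadratic penalty with the negative log of a mixture density; the pruning step would then remove the input with the smallest saliency and retrain.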