Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra
Published in: The Journal of the Acoustical Society of America, 2021-06, Vol. 149 (6), p. 4119-4133
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: Neural networks are increasingly being applied to problems in acoustics and audio signal processing. Large audio datasets are being generated for use in training machine learning algorithms, and the reduction of training times is of increasing relevance. The work presented here begins by reformulating the analysis of the classical multilayer perceptron to show the explicit dependence of network parameters on the properties of the weight matrices in the network. This analysis then allows the application of the singular value decomposition (SVD) to the weight matrices. An algorithm is presented that makes use of regular applications of the SVD to progressively reduce the dimensionality of the network. This results in significant reductions in network training times of up to 50% with very little or no loss in accuracy. The use of the algorithm is demonstrated by applying it to a number of acoustical classification problems that help quantify the extent to which closely related spectra can be distinguished by machine learning.
ISSN: 0001-4966, 1520-8524
DOI: 10.1121/10.0005126
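
The abstract describes applying the SVD to a network's weight matrices to progressively reduce dimensionality. The following is a minimal NumPy sketch of that general idea, not the authors' published algorithm: it truncates a single dense-layer weight matrix to a low rank chosen by a spectral-energy threshold and refactors it into two thinner layers. The function name `truncate_weight_matrix`, the `energy` threshold, and the example layer sizes are illustrative assumptions.

```python
import numpy as np

def truncate_weight_matrix(W, energy=0.99):
    """Split W (n_out x n_in) into two thinner factors via a rank-r SVD.

    r is the smallest rank whose singular values retain the requested
    fraction of the total spectral energy. (Illustrative sketch only.)
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy) + 1)
    # W is approximated by (U_r * s_r) @ Vt_r, i.e. an n_in -> r -> n_out bottleneck,
    # so one dense layer becomes two layers with far fewer parameters when r is small.
    W_out = U[:, :r] * s[:r]   # shape (n_out, r)
    W_in = Vt[:r, :]           # shape (r, n_in)
    return W_in, W_out, r

# Example: an approximately rank-32 "hidden layer" mapping 512 -> 256 features.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 512)) \
    + 0.01 * rng.standard_normal((256, 512))
W_in, W_out, r = truncate_weight_matrix(W, energy=0.95)

x = rng.standard_normal(512)
y_full = W @ x
y_low = W_out @ (W_in @ x)
print("kept rank:", r,
      "relative error:", np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full))
```

In a training loop, one would presumably apply such a truncation periodically ("regular applications of the SVD") so that subsequent forward and backward passes operate on the smaller factored matrices, which is where the reported reduction in training time would come from.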