Reprint of: A forward–backward greedy approach for sparse multiscale learning
Published in: Computer Methods in Applied Mechanics and Engineering, 2022-12, Vol. 402, p. 115760, Article 115760
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Multiscale models are known to be successful in uncovering and representing structure in data at different resolutions. We propose here a feature-driven Reproducing Kernel Hilbert Space (RKHS) whose associated kernel has a weighted multiscale structure. For generating approximations in this space, we provide a practical forward–backward algorithm that is shown to greedily construct a set of basis functions with a multiscale structure, enabling a sparse, efficient representation of the given data and fast predictions. We provide a detailed analysis of the algorithm, including recommendations for selecting algorithmic hyperparameters and estimates of probabilistic rates of convergence at individual scales. We also extend this analysis to the multiscale setting, studying the effects of finite scale truncation on the quality of the solution in the inherent RKHS. In the last section, we analyze the performance of the approach on a variety of simulated and real data sets, illustrating the efficiency claims in terms of model quality and data reduction.
Highlights:
• Multiscale approximation using a native RKHS with kernels of varying support.
• Intelligently pick relevant scales for efficient data reduction and fast predictions.
• A greedy Forward–Backward approach reduces MSE as new basis functions are added.
• Analytical and experimental results justify hyperparameter (ϵ0) selection.
ISSN: 0045-7825, 1879-2138
DOI: 10.1016/j.cma.2022.115760