Distributed learning for sketched kernel regression

Bibliographic Details
Published in: Neural Networks, 2021-11, Vol. 143, pp. 368-376
Main authors: Lian, Heng; Liu, Jiamin; Fan, Zengyan
Format: Article
Language: English
Online access: Full text
Abstract
We study distributed learning for regularized least squares regression in a reproducing kernel Hilbert space (RKHS). The divide-and-conquer strategy is a frequently used approach for dealing with very large data sets: it computes an estimator on each subset and then averages these local estimators. The existing theoretical constraint on the number of subsets implies that the size of each subset can still be large. Random sketching can therefore be used to produce the local estimator on each subset, further reducing the computation compared to vanilla divide-and-conquer. In this setting, sketching and divide-and-conquer are complementary to each other in dealing with the large sample size. We show that optimal learning rates can be retained. Simulations are performed to compare sketched and non-sketched divide-and-conquer methods.
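
To make the recipe in the abstract concrete, here is a minimal NumPy sketch of sketched divide-and-conquer kernel ridge regression, assuming a Gaussian (RBF) kernel, Gaussian sketch matrices, and uniform random splitting of the data; all function names, the sketch dimension s, and the hyperparameter values are illustrative choices, not the authors' implementation.

import numpy as np

def rbf_kernel(A, B, gamma):
    # Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def sketched_krr_local(X, y, lam, s, gamma, rng):
    # Sketched kernel ridge regression on one subset: restrict the
    # coefficient vector to alpha = S^T beta for an s x n Gaussian sketch S,
    # so only an s x s linear system is solved instead of an n x n one.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    S = rng.standard_normal((s, n)) / np.sqrt(s)
    KSt = K @ S.T                          # n x s
    A = KSt.T @ KSt + n * lam * (S @ KSt)  # sketched normal equations
    beta = np.linalg.solve(A, KSt.T @ y)
    return S.T @ beta                      # coefficients in the original span

def dc_sketched_krr(X, y, m, s, lam, gamma, seed=0):
    # Divide-and-conquer: fit a sketched local estimator on each of the
    # m subsets; the returned predictor averages the m local predictions.
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(y)), m)
    models = [(X[idx], sketched_krr_local(X[idx], y[idx], lam, s, gamma, rng))
              for idx in parts]
    def predict(Xnew):
        return np.mean([rbf_kernel(Xnew, Xj, gamma) @ a
                        for Xj, a in models], axis=0)
    return predict

# Toy usage: noisy sine regression on n = 2000 points.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(2000)
f_hat = dc_sketched_krr(X, y, m=10, s=40, lam=1e-3, gamma=10.0)
print(f_hat(np.array([[0.25], [0.75]])))  # approx [1, -1]

Each local fit here costs roughly O((n/m)^2 s) instead of O((n/m)^3), which is the sense in which sketching and divide-and-conquer complement each other on the sample size.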
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/j.neunet.2021.06.020