Finite Sample Analysis Of Dynamic Regression Parameter Learning
Saved in:

| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary:

NeurIPS 2022. We consider the dynamic linear regression problem, where the predictor vector may vary with time. This problem can be modeled as a linear dynamical system with a non-constant observation operator, where the parameters that need to be learned are the variances of both the process noise and the observation noise. While variance estimation for dynamic regression is a natural problem with a variety of applications, existing approaches either lack guarantees altogether or have only asymptotic guarantees without explicit rates. In particular, the existing literature offers no answer to the following fundamental question: in terms of data characteristics, what does the convergence rate depend on? In this paper we study the global system operator, that is, the operator that maps the noise vectors to the output. We obtain estimates on its spectrum and, as a result, derive the first known variance estimators with finite-sample complexity guarantees. The proposed bounds depend on the shape of a certain spectrum related to the system operator, and thus provide the first known explicit geometric parameter of the data that can be used to bound estimation errors. In addition, the results hold for arbitrary sub-Gaussian distributions of the noise terms. We evaluate the approach on synthetic and real-world benchmarks.
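The model described in the summary can be sketched as a short simulation. This is a minimal illustration of the problem setup only; the variable names and the oracle residual check below are ours, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dynamic linear regression as a linear dynamical system with a
# time-varying observation operator x_t (the predictor vector):
#
#   theta_{t+1} = theta_t + w_t,      w_t ~ N(0, q * I)  (process noise)
#   y_t         = x_t . theta_t + v_t, v_t ~ N(0, r)     (observation noise)
#
# The unknowns targeted by the paper's estimators are q and r.
d, T = 3, 500
q_true, r_true = 0.01, 0.25

theta = np.zeros(d)
X = rng.normal(size=(T, d))   # time-varying predictor vectors
y = np.empty(T)
thetas = np.empty((T, d))
for t in range(T):
    thetas[t] = theta
    y[t] = X[t] @ theta + rng.normal(scale=np.sqrt(r_true))
    theta = theta + rng.normal(scale=np.sqrt(q_true), size=d)

# Oracle check: with access to the latent states theta_t (which a real
# estimator does not have), the residual variances recover r and q.
r_hat = np.var(y - np.einsum("td,td->t", X, thetas))
q_hat = np.diff(thetas, axis=0).var()
print(r_hat, q_hat)  # close to r_true and q_true
```

The point of the oracle check is to make the role of the two variances concrete: the paper's contribution is estimating `q` and `r` without observing the latent states, with finite-sample guarantees.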
DOI: 10.48550/arxiv.1906.05591