Max-Linear Regression by Convex Programming

Bibliographic Details
Published in: IEEE Transactions on Information Theory, 2024-03, Vol. 70 (3), pp. 1897-1912
Authors: Kim, Seonho; Bahmani, Sohail; Lee, Kiryung
Format: Article
Language: English

Abstract
We consider the multivariate max-linear regression problem in which the model parameters $\beta_1, \dotsc, \beta_k \in \mathbb{R}^p$ must be estimated from $n$ independent samples of the (noisy) observations $y = \max_{1 \leq j \leq k} \beta_j^{\mathsf{T}} x + \mathrm{noise}$. The max-linear model vastly generalizes the conventional linear model, and it can approximate any convex function to arbitrary accuracy when the number of linear models $k$ is large enough. However, the inherent nonlinearity of the max-linear model makes estimating the regression parameters computationally challenging; in particular, no estimator based on convex programming is known in the literature. We formulate and analyze a scalable convex program, given by anchored regression (AR), as an estimator for the max-linear regression problem. Under the standard Gaussian observation setting, we present a non-asymptotic performance guarantee showing that the convex program recovers the parameters with high probability. When the $k$ linear components are equally likely to achieve the maximum, our result shows that the number of noise-free observations sufficient for exact recovery scales as $k^4 p$ up to a logarithmic factor. This sample complexity coincides with that of alternating minimization (Ghosh et al., 2021). Moreover, the same sample complexity applies when the observations are corrupted by arbitrary deterministic noise. We provide empirical results showing that our method performs as the theory predicts and is competitive with the alternating minimization algorithm, particularly in the presence of multiplicative Bernoulli noise. Furthermore, we show empirically that a recursive application of AR can significantly improve the estimation accuracy.
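
To make the setup concrete, here is a minimal Python sketch of one standard anchored-regression program applied to noise-free max-linear data: a linear objective defined by anchor vectors is maximized over the polyhedron of parameter matrices consistent with the observations. This follows the general anchored-regression template; it is an assumption for illustration, not the paper's reference implementation, and the anchor construction (a perturbation of the ground truth, standing in for an initial estimate) is likewise hypothetical. Requires NumPy and CVXPY.

```python
# Illustrative sketch of anchored regression (AR) for noise-free
# max-linear data. The program below -- a linear objective defined by
# anchor vectors, maximized over the parameters consistent with the
# observations -- is one common AR formulation and is assumed here,
# not taken verbatim from the paper.
from itertools import permutations

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 2000, 10, 3           # samples, dimension, number of linear pieces

# Ground-truth parameters and standard Gaussian covariates.
B_true = rng.standard_normal((k, p))
X = rng.standard_normal((n, p))
y = (X @ B_true.T).max(axis=1)  # noise-free max-linear observations

# Anchor vectors: in practice these would come from a rough initial
# estimate; here they are noisy copies of the truth, for illustration only.
anchors = B_true + 0.3 * rng.standard_normal((k, p))

# Anchored regression as a linear program: maximize the anchor-aligned
# objective subject to the convex consistency constraints
#   max_j <beta_j, x_i> <= y_i,
# which decompose into the linear inequalities <beta_j, x_i> <= y_i for all j.
B = cp.Variable((k, p))
objective = cp.Maximize(cp.sum(cp.multiply(anchors, B)))
constraints = [X @ B[j] <= y for j in range(k)]
cp.Problem(objective, constraints).solve()

# Report the parameter error up to relabeling of the k linear components.
err = min(np.linalg.norm(np.asarray(B.value)[list(perm)] - B_true)
          for perm in permutations(range(k)))
print(f"best parameter error over label permutations: {err:.3e}")
```

Note that the constraint set is convex because a maximum of linear functions bounded above by $y_i$ is equivalent to $k$ linear inequalities per sample, so the whole program is a linear program with $kp$ variables and $kn$ constraints.
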
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2024.3350518