Solving Linear Regression with Insensitive Loss by Boosting


Bibliographic Details
Published in: IEICE Transactions on Information and Systems, 2024/03/01, Vol. E107.D(3), pp. 294-300
Authors: MITSUBOSHI, Ryotaro; HATANO, Kohei; TAKIMOTO, Eiji
Format: Article
Language: English
Online access: Full text
Abstract
Following the formulation of Support Vector Regression (SVR), we consider a regression analogue of soft margin optimization over the feature space indexed by a hypothesis class H. More specifically, the problem is to find a linear model w ∈ ℝ^H that minimizes the sum of ρ-insensitive losses over all training data for as small a ρ as possible, where the ρ-insensitive loss for a single data point (x_i, y_i) is defined as max{|y_i − Σ_h w_h h(x_i)| − ρ, 0}. Intuitively, the parameter ρ and the ρ-insensitive loss are defined analogously to the target margin and the hinge loss in soft margin optimization, respectively. Our formulation differs from SVR in two respects: (1) we use L1-norm regularization instead of L2-norm regularization, and (2) the feature space is defined implicitly by a hypothesis class rather than by a kernel. We propose a boosting-type algorithm that solves the problem with a theoretically guaranteed convergence rate under a natural weak-learnability assumption.
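
For concreteness, the following is a minimal NumPy sketch of the ρ-insensitive loss defined above, assuming the predictions h(x_i) of the hypotheses in H have been precomputed into a matrix. The function name, variable names, and toy numbers are illustrative assumptions, not taken from the paper, and the boosting algorithm itself is not shown.

```python
import numpy as np

def insensitive_loss(w, H_outputs, y, rho):
    """Sum of rho-insensitive losses over the training data (illustrative sketch).

    H_outputs[i, h] holds h(x_i) for each hypothesis h in the class H,
    w is the linear model over H, and rho is the insensitivity parameter.
    Each term is max{|y_i - sum_h w_h h(x_i)| - rho, 0}.
    """
    residuals = np.abs(y - H_outputs @ w)          # |y_i - sum_h w_h h(x_i)|
    return np.maximum(residuals - rho, 0.0).sum()  # residuals within rho incur no loss

# Toy usage with made-up numbers: 3 examples, 2 hypotheses.
H_outputs = np.array([[1.0, 0.5], [0.2, -1.0], [0.0, 2.0]])
y = np.array([0.8, -0.5, 1.9])
w = np.array([0.5, 0.5])
print(insensitive_loss(w, H_outputs, y, rho=0.1))
```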
ISSN: 0916-8532, 1745-1361
DOI: 10.1587/transinf.2023FCP0004