Stochastic Data-Driven Bouligand Landweber Method for Solving Non-smooth Inverse Problems
Format: Article
Language: English
Abstract: In this study, we present and analyze a novel variant of the stochastic gradient descent method, referred to as the stochastic data-driven Bouligand Landweber iteration, tailored to systems of non-smooth ill-posed inverse problems. Our method incorporates training data through a bounded linear operator that guides the iterative procedure. At each iteration step, the method randomly selects one equation from the nonlinear system together with a data-driven term. For exact data, we establish that the mean-square iteration error converges to zero. For noisy data, we combine our approach with a predefined stopping criterion, which we refer to as an a-priori stopping rule. We provide a comprehensive theoretical foundation, establishing convergence and stability of the scheme in infinite-dimensional Hilbert spaces. These theoretical results are further supported by an example that fulfills the assumptions of the paper.
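The random-equation-selection idea in the abstract can be illustrated with a minimal sketch of a plain stochastic Landweber iteration on a toy linear system. This is only an assumption-laden stand-in: the toy operators `A`, the step size `mu`, and the linear setting are illustrative, and the paper's Bouligand subderivative for non-smooth operators and its data-driven correction term are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy consistent system of N linear equations A_i x = y_i.
# (A stand-in for the paper's non-smooth nonlinear system.)
N, dim = 5, 3
A = [rng.standard_normal((4, dim)) for _ in range(N)]
x_true = rng.standard_normal(dim)
y = [Ai @ x_true for Ai in A]

def stochastic_landweber(A, y, steps=2000, mu=0.05):
    """At each step, pick one equation at random and take a
    Landweber (gradient-type) update using its adjoint."""
    x = np.zeros(A[0].shape[1])
    for _ in range(steps):
        i = rng.integers(len(A))        # random equation, as in the abstract
        residual = A[i] @ x - y[i]
        x = x - mu * A[i].T @ residual  # adjoint step (transpose, here)
    return x

x_rec = stochastic_landweber(A, y)
```

For exact (noise-free) data, as in the abstract's first convergence result, the reconstruction error shrinks toward zero; with noisy data one would instead stop early according to an a-priori rule.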
DOI: 10.48550/arxiv.2402.04772