Stochastic Data-Driven Bouligand Landweber Method for Solving Non-smooth Inverse Problems

Bibliographic Details
Published in: arXiv.org 2024-02
Main authors: Bajpai, Harshit; Mittal, Gaurav; Giri, Ankik Kumar
Format: Article
Language: English
Online access: Full text
Description
Summary: In this study, we present and analyze a novel variant of the stochastic gradient descent method, referred to as the stochastic data-driven Bouligand Landweber iteration, tailored to systems of non-smooth ill-posed inverse problems. Our method incorporates training data through a bounded linear operator that guides the iterative procedure. At each iteration step, the method randomly chooses one equation from the nonlinear system, augmented by the data-driven term. For exact data, we establish that the mean-square iteration error converges to zero. For noisy data, we combine the method with a predefined stopping criterion, which we refer to as an a-priori stopping rule. We provide a comprehensive theoretical foundation, establishing convergence and stability of the scheme in infinite-dimensional Hilbert spaces. These results are further supported by an example that fulfills the assumptions of the paper.
ISSN: 2331-8422
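
As a rough, purely illustrative sketch of the kind of iteration summarized above: this record does not give the authors' non-smooth forward maps, Bouligand subderivatives, data-driven operator, or step-size rule, so the fixed random linear operators A_i, the step size mu, and the a-priori stopping index k_star below are stand-in assumptions rather than the method of the paper.

import numpy as np

# Toy stand-in problem: fixed random linear operators A_i take the place of
# the (unspecified here) linearizations that guide each step of the iteration.
rng = np.random.default_rng(0)
n, m, num_eqs = 50, 40, 5
A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(num_eqs)]

x_true = rng.standard_normal(n)
delta = 1e-2                                     # assumed noise level
y_noisy = [Ai @ x_true + delta * rng.standard_normal(m) for Ai in A]

mu = 0.1                                         # assumed step size
k_star = 500                                     # assumed a-priori stopping index

x = np.zeros(n)                                  # initial guess
for k in range(k_star):
    i = rng.integers(num_eqs)                    # randomly pick one equation
    residual = A[i] @ x - y_noisy[i]             # residual of the chosen equation
    x = x - mu * A[i].T @ residual               # adjoint-based Landweber-type step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Each sweep picks one equation at random and takes an adjoint-based Landweber-type step on its residual; for noisy data the iteration simply stops after the preset k_star steps, mimicking an a-priori stopping rule.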