Robust Tensor Factorization for Color Image and Grayscale Video Recovery

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 174410-174423
Authors: Du, Shiqiang; Shi, Yuqing; Hu, Wenjin; Wang, Weilan; Lian, Jing
Format: Article
Language: English
Online access: Full text
Abstract: Low-rank tensor completion (LRTC) plays an important role in many fields, such as machine learning, computer vision, image processing, and mathematical theory. Since rank minimization is an NP-hard problem, one strategy relaxes it to the convex tensor nuclear norm (TNN), which requires repeated, time-consuming SVD computations; another factorizes the tensor into a product of two smaller tensors, which easily falls into local minima. To overcome these shortcomings, we propose a robust tensor factorization (RTF) model for solving LRTC. In RTF, the noisy tensor data with missing entries is decomposed into a low-rank tensor and a noise tensor, and the low-rank tensor is then equivalently factorized into the t-product (essentially a convolution of vectors) of two smaller tensors: an orthogonal dictionary tensor and a low-rank representation tensor. Meanwhile, the TNN of the low-rank representation tensor is adopted to characterize the low-rank structure of the tensor data and preserve global information. An effective iterative update algorithm based on the alternating direction method of multipliers (ADMM) is then proposed to solve RTF. Finally, numerical experiments on image recovery and video completion tasks show the effectiveness of the proposed RTF model compared with several state-of-the-art tensor completion models.
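The abstract relies on the t-product and the tensor nuclear norm (TNN). For orientation, the following is a minimal NumPy sketch of these two operations under one common convention (FFT along the third mode, slice-wise matrix products, and the TNN as the average of the Fourier-domain frontal-slice nuclear norms); it is an illustrative sketch only, not the authors' implementation, and the exact scaling and constraints used in the paper may differ.

import numpy as np

def t_product(A, B):
    # t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3):
    # FFT along the third mode, slice-wise matrix products, inverse FFT.
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
    return np.real(np.fft.ifft(C_hat, axis=2))

def tensor_nuclear_norm(A):
    # One common TNN convention: the average of the nuclear norms of the
    # frontal slices of A in the Fourier domain.
    A_hat = np.fft.fft(A, axis=2)
    n3 = A.shape[2]
    return sum(np.linalg.norm(A_hat[:, :, k], ord='nuc')
               for k in range(n3)) / n3

For a color image stored as an n1 x n2 x 3 array, the low-rank term in RTF would take the form t_product(D, R) with a dictionary tensor D that is orthogonal under the t-product and a representation tensor R penalized by the TNN; the precise objective, constraints, and ADMM updates are specified in the paper itself.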
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3024635