Multilinear multitask learning by transformed tensor singular value decomposition

Bibliographic Details
Published in: Machine Learning with Applications 2023-09, Vol. 13, p. 100479, Article 100479
Main authors: Zhang, Xiongjun, Wu, Jin, Ng, Michael K.
Format: Article
Language: English
Online access: Full text
Description
Summary: In this paper, we study the problem of multilinear multitask learning (MLMTL), in which all tasks are stacked into a third-order tensor for consideration. In contrast to conventional multitask learning, MLMTL can better explore inherent correlations among multiple tasks by utilizing a multilinear low-rank structure. Existing approaches to MLMTL are mainly based on the sum of singular values for approximating the low-rank matrices obtained by matricizing the third-order tensor. However, these methods are suboptimal in the Tucker rank approximation. In order to elucidate intrinsic correlations among multiple tasks, we present a new approach that uses a transformed tensor nuclear norm (TTNN) constraint in the objective function. The main advantage of the proposed approach is that it can acquire a low transformed multi-rank structure in a transformed tensor by applying suitable unitary transformations, which helps determine principal components when grouping multiple tasks and describes their intrinsic correlations more precisely. Furthermore, we establish an excess risk bound for the minimizer of the proposed TTNN approach. Experimental results, on both synthetic problems and real-world images, show that the mean-square errors of the proposed method are lower than those of existing methods across different numbers of tasks and training samples in MLMTL.
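The transformed tensor nuclear norm described in the summary applies a unitary transform along the third mode of the tensor and then sums the singular values of the resulting frontal slices. A minimal NumPy sketch of this quantity follows; the function name and the default choice of a normalized DFT transform are illustrative assumptions (the paper allows general, possibly data-driven, unitary transformations), not the authors' implementation:

```python
import numpy as np

def transformed_tensor_nuclear_norm(X, U=None):
    """Compute a transformed tensor nuclear norm (TTNN) of a third-order tensor X.

    Steps:
      1. Apply a unitary transform U along the third mode of X.
      2. Sum the nuclear norms (sums of singular values) of the
         transformed frontal slices.

    With U equal to the normalized DFT matrix, this coincides with the
    standard t-SVD-based tensor nuclear norm.
    """
    n1, n2, n3 = X.shape
    if U is None:
        # Normalized DFT matrix as a default unitary transform (assumption).
        U = np.fft.fft(np.eye(n3)) / np.sqrt(n3)
    # Transform along mode 3: Xhat[:, :, k] = sum_m U[k, m] * X[:, :, m]
    Xhat = np.tensordot(X, U, axes=([2], [1]))
    # Sum singular values of every transformed frontal slice.
    return sum(np.linalg.svd(Xhat[:, :, k], compute_uv=False).sum()
               for k in range(n3))
```

For example, a tensor whose only nonzero frontal slice is the 2x2 identity has TTNN equal to 2*sqrt(2) under the normalized DFT transform, since each transformed slice is the identity scaled by 1/sqrt(2).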
ISSN: 2666-8270
DOI: 10.1016/j.mlwa.2023.100479