A survey on kernel-based multi-task learning
Published in: Neurocomputing (Amsterdam), 2024-04, Vol. 577, p. 127255, Article 127255
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Multi-Task Learning (MTL) seeks to leverage the learning processes of several tasks by solving them simultaneously so as to arrive at better models. This advantage is obtained by coupling the tasks together, creating paths along which information can be shared among them. While deep learning models have been applied successfully to MTL in different fields, the performance of deep approaches often depends on using large amounts of data to fit complex models with many parameters, which may not always be feasible; moreover, deep models may lack some advantages that other approaches offer. Kernel methods, such as Support Vector Machines or Gaussian Processes, offer characteristics such as better generalization ability or the availability of uncertainty estimates, which may make them more suitable for small to medium size datasets. As a consequence, kernel-based MTL methods stand out among these alternative approaches to deep models, and there also exists a rich literature on them. In this paper we review these kernel-based multi-task approaches, group them according to a taxonomy we propose, link some of them to foundational work in machine learning, and comment on datasets commonly used in their study and on relevant applications that use them.
Highlights:
- We review the literature of kernel-based methods for Multi-Task Learning.
- We propose a taxonomy for MTL kernel-based approaches, including the more recent work.
- We propose three main groups: feature-based, combination-based, regularization-based.
- We review the Learning to Learn and Learning Using Privileged Information paradigms.
- We describe MTL problems used in research as well as some real world applications.
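To give a flavor of the combination-based family named in the highlights, the sketch below builds a classic multi-task kernel of the form K((x, t), (x', t')) = (mu + [t = t']) * k(x, x'): a shared similarity term plus a task-specific term active only when both points belong to the same task. This is a minimal illustration, not code from the survey; the toy data, the `mu` coupling parameter, and all function names are assumptions made for the example.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Standard RBF kernel between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multitask_kernel(X, t, Z, s, mu=0.5, gamma=1.0):
    # Combination-style multi-task kernel: a shared component (mu)
    # plus a task-specific component active only when tasks match.
    same = (t[:, None] == s[None, :]).astype(float)
    return (mu + same) * rbf(X, Z, gamma)

# Toy data: two tasks sampling vertically shifted versions of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(40, 1))
t = np.repeat([0, 1], 20)                    # task labels
y = np.sin(X[:, 0]) + 0.3 * t + 0.05 * rng.standard_normal(40)

# Kernel ridge regression on the joint (input, task) representation.
K = multitask_kernel(X, t, X, t)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)

# Predict for task 0 at a new input; task 1's data helps via mu.
x_new = np.array([[1.5]])
pred = multitask_kernel(x_new, np.array([0]), X, t) @ alpha
print(pred[0])  # roughly sin(1.5)
```

Setting `mu = 0` decouples the tasks into independent single-task learners, while a large `mu` pools all data into one shared model; intermediate values trade off between the two, which is the core idea behind this line of kernel-based MTL methods.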
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2024.127255