Exploiting Multitask Learning Schemes Using Private Subnetworks

Detailed Description

Bibliographic Details
Main Authors: García-Laencina, Pedro J., Figueiras-Vidal, Aníbal R., Serrano-García, Jesús, Sancho-Gómez, José-Luis
Format: Conference Proceedings
Language: English
Online Access: Full Text
Description
Summary: Many problems in pattern recognition focus on learning one main task, known as Single-Task Learning (STL). However, most of them can be reformulated as learning several tasks related to the main task at the same time while using a shared representation, known as Multitask Learning (MTL). In this paper, a new MTL architecture is proposed and its performance is compared with that of previous MTL schemes. This new MTL scheme makes use of private subnetworks to induce a bias in the learning process. The results obtained on artificial and real data sets show how the use of these private subnetworks in MTL yields better generalization capabilities and faster learning.
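The architecture the summary describes, a representation shared by all tasks plus a private subnetwork per task, can be sketched roughly as below. Layer sizes, the tanh activations, and the function name `mtl_forward` are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mtl_forward(x, shared_W, private_Ws, out_Ws):
    """One forward pass of a minimal MTL sketch: a shared hidden layer
    feeds every task, while each task also has a private hidden
    subnetwork that only its own output unit sees."""
    h_shared = np.tanh(x @ shared_W)           # representation shared by all tasks
    outputs = []
    for W_priv, W_out in zip(private_Ws, out_Ws):
        h_priv = np.tanh(x @ W_priv)           # task-specific (private) features
        h = np.concatenate([h_shared, h_priv]) # each task sees shared + its own private units
        outputs.append(np.tanh(h @ W_out))     # one output per task
    return outputs

# Toy dimensions (assumed): 4 inputs, 3 shared units, 2 private units, 2 tasks.
x = rng.normal(size=4)
shared_W = rng.normal(size=(4, 3))
private_Ws = [rng.normal(size=(4, 2)) for _ in range(2)]
out_Ws = [rng.normal(size=(5, 1)) for _ in range(2)]

ys = mtl_forward(x, shared_W, private_Ws, out_Ws)
print(len(ys))  # → 2, one output per task
```

Because the shared weights receive gradient signal from every task while each private subnetwork is trained only by its own task, the private units can absorb task-specific structure, which is one way to read the inductive bias the paper attributes to them.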
ISSN:0302-9743
1611-3349
DOI:10.1007/11494669_29