Multi-Task Networks With Universe, Group, and Task Feature Learning
Format: | Article |
Language: | English |
Abstract: | We present methods for multi-task learning that take advantage of natural
groupings of related tasks. Task groups may be defined along known properties
of the tasks, such as task domain or language. Such task groups represent
supervised information at the inter-task level and can be encoded into the
model. We investigate two variants of neural network architectures that
accomplish this, learning different feature spaces at the levels of individual
tasks, task groups, as well as the universe of all tasks: (1) parallel
architectures encode each input simultaneously into feature spaces at different
levels; (2) serial architectures encode each input successively into feature
spaces at different levels in the task hierarchy. We demonstrate the methods on
natural language understanding (NLU) tasks, where a grouping of tasks into
different task domains leads to improved performance on ATIS, Snips, and a
large in-house dataset. |
DOI: | 10.48550/arxiv.1907.01791 |
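The parallel and serial variants described in the abstract can be sketched with toy numpy encoders. This is a minimal illustration, not the paper's implementation: the encoders are untrained random linear maps standing in for learned networks, and the task and group names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, out_dim):
    """Random linear map + tanh, standing in for a trained feature encoder."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

IN, FEAT = 8, 4
# Hypothetical task -> task-group assignment (e.g. grouping by domain).
groups = {"t1": "g1", "t2": "g1", "t3": "g2"}

# Parallel variant: every level encodes the raw input simultaneously,
# and the task-, group-, and universe-level features are concatenated.
par_universe = make_encoder(IN, FEAT)
par_group = {g: make_encoder(IN, FEAT) for g in set(groups.values())}
par_task = {t: make_encoder(IN, FEAT) for t in groups}

def parallel_features(x, task):
    g = groups[task]
    return np.concatenate([par_task[task](x), par_group[g](x), par_universe(x)])

# Serial variant: the levels are chained, encoding the input successively
# at universe, then group, then task level of the hierarchy.
ser_universe = make_encoder(IN, FEAT)
ser_group = {g: make_encoder(FEAT, FEAT) for g in set(groups.values())}
ser_task = {t: make_encoder(FEAT, FEAT) for t in groups}

def serial_features(x, task):
    g = groups[task]
    return ser_task[task](ser_group[g](ser_universe(x)))

x = rng.standard_normal(IN)
print(parallel_features(x, "t1").shape)  # (12,) -- three FEAT-dim spaces concatenated
print(serial_features(x, "t1").shape)    # (4,)  -- final task-level feature space
```

Note the structural difference: the parallel model widens the representation (its downstream classifier sees all three feature spaces at once), while the serial model keeps one feature width and lets supervision at each level shape the intermediate spaces.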