Scalarization for Multi-Task and Multi-Domain Learning at Scale
Format: Article
Language: English
Abstract: Training a single model on multiple input domains and/or output tasks allows for compressing information from multiple sources into a unified backbone, hence improving model efficiency. It also enables potential positive knowledge transfer across tasks/domains, leading to improved accuracy and data-efficient training. However, optimizing such networks is a challenge, in particular due to discrepancies between the different tasks or domains. Despite several hypotheses and solutions proposed over the years, recent work has shown that uniform scalarization training, i.e., simply minimizing the average of the task losses, yields performance on par with more costly state-of-the-art (SotA) optimization methods. This raises the question of how well we understand the training dynamics of multi-task and multi-domain networks. In this work, we first devise a large-scale unified analysis of multi-domain and multi-task learning to better understand the dynamics of scalarization across varied task/domain combinations and model sizes. Following these insights, we then propose to leverage population-based training to efficiently search for the optimal scalarization weights when dealing with a large number of tasks or domains.
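Concretely, uniform scalarization just minimizes the unweighted average of the per-task losses. Below is a minimal sketch of that training step in PyTorch, assuming a toy setup with a shared backbone, one linear head per task, and synthetic data; the layer sizes, task count, and mean-squared-error losses are illustrative placeholders rather than the paper's actual models or pipeline.

```python
# Minimal sketch of scalarization for multi-task training (illustrative only):
# a shared backbone with one head per task, trained on a weighted sum of
# per-task losses. Uniform scalarization corresponds to w_k = 1/K.
import torch
import torch.nn as nn

torch.manual_seed(0)

num_tasks = 3
backbone = nn.Sequential(nn.Linear(16, 64), nn.ReLU())               # shared parameters
heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(num_tasks)])  # task-specific heads
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(heads.parameters()), lr=1e-2
)

# Uniform scalarization: every task contributes equally to the objective.
weights = torch.full((num_tasks,), 1.0 / num_tasks)

for step in range(100):
    # Toy batch; in practice each task/domain has its own data loader.
    x = torch.randn(32, 16)
    targets = [torch.randn(32, 1) for _ in range(num_tasks)]

    features = backbone(x)
    task_losses = torch.stack(
        [nn.functional.mse_loss(head(features), t) for head, t in zip(heads, targets)]
    )

    # Scalarized objective: L(theta) = sum_k w_k * L_k(theta).
    loss = torch.dot(weights, task_losses)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this framing, the general objective is L(θ) = Σ_k w_k L_k(θ); uniform scalarization fixes w_k = 1/K, and the search described in the abstract is over this weight vector.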
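The population-based search over scalarization weights mentioned at the end of the abstract can be sketched at the same level of abstraction. The exploit-and-explore loop below is a generic illustration, not the paper's procedure: `train_for_a_while` and `validation_score` are hypothetical placeholders for per-member training and multi-task evaluation, and a full implementation would also copy model checkpoints along with the weights.

```python
# Generic population-based search over scalarization weight vectors
# (illustrative sketch; the training/evaluation functions are stand-ins).
import random

num_tasks = 3
population_size = 8
num_rounds = 10

def random_simplex_weights(k):
    """Draw a random positive weight vector that sums to one."""
    raw = [random.random() for _ in range(k)]
    total = sum(raw)
    return [r / total for r in raw]

def perturb(weights, scale=0.2):
    """Multiplicatively perturb the weights, then renormalize."""
    raw = [w * random.uniform(1 - scale, 1 + scale) for w in weights]
    total = sum(raw)
    return [r / total for r in raw]

def train_for_a_while(member):
    """Hypothetical placeholder: run some training steps with member['weights']."""
    pass

def validation_score(member):
    """Hypothetical placeholder: average validation metric across tasks (higher is better)."""
    return random.random()

population = [{"weights": random_simplex_weights(num_tasks)} for _ in range(population_size)]

for round_idx in range(num_rounds):
    for member in population:
        train_for_a_while(member)
        member["score"] = validation_score(member)

    # Exploit and explore: the bottom quartile copies the scalarization weights
    # of a top-quartile member and perturbs them before the next round.
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    quartile = max(1, population_size // 4)
    for loser in ranked[-quartile:]:
        winner = random.choice(ranked[:quartile])
        loser["weights"] = perturb(winner["weights"])
```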
DOI: 10.48550/arxiv.2310.08910