Task Addition in Multi-Task Learning by Geometrical Alignment
Saved in:
Main Authors: | , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | Training deep learning models on limited data while maintaining
generalization is one of the fundamental challenges in molecular property
prediction. One effective solution is transferring knowledge extracted from
abundant datasets to those with scarce data. Recently, a novel algorithm called
Geometrically Aligned Transfer Encoder (GATE) has been introduced, which uses
soft parameter sharing by aligning the geometrical shapes of task-specific
latent spaces. However, GATE faces limitations in scaling to multiple tasks due
to computational costs. In this study, we propose a task addition approach for
GATE to improve performance on target tasks with limited data while minimizing
computational complexity. This is achieved through supervised multi-task
pre-training on a large dataset, followed by the addition and training of
task-specific modules for each target task. Our experiments demonstrate the
superior performance of the task addition strategy for GATE over conventional
multi-task methods, with comparable computational costs. |
---|---|
DOI: | 10.48550/arxiv.2409.16645 |
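The task-addition workflow summarized above — pre-train a shared encoder on abundant tasks, then freeze it and fit only a small task-specific module on the scarce target data — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' GATE implementation: the linear encoder, the shapes, and the least-squares head fit are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen encoder: stands in for the multi-task
# pre-trained shared module. In GATE this would be a learned
# network aligned across task-specific latent spaces.
W_shared = rng.normal(size=(8, 4))

def encode(x):
    """Frozen shared encoder: a fixed linear map into the latent space."""
    return x @ W_shared

# Synthetic "scarce" data for a newly added target task.
X = rng.normal(size=(32, 8))
true_head = rng.normal(size=4)
y = encode(X) @ true_head

# Task addition: train only the new task-specific head on the frozen
# latents; W_shared is never updated, so the added task costs only
# the (small) head fit rather than a full multi-task retraining.
Z = encode(X)
head, *_ = np.linalg.lstsq(Z, y, rcond=None)

mse = float(np.mean((Z @ head - y) ** 2))
print(f"task-head MSE: {mse:.2e}")
```

Since only the head's parameters are optimized, adding further tasks scales with the head size rather than with the shared encoder, which is the computational advantage the abstract claims over conventional multi-task training.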