Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports
Main Authors: , , , , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Pretrained Transformer-based models fine-tuned on domain-specific corpora have changed the landscape of NLP. However, training or fine-tuning these models for individual tasks can be time-consuming and resource-intensive. Thus, much current research focuses on using transformers for multi-task learning (Raffel et al., 2020) and on how to group tasks so that a multi-task model learns effective representations that can be shared across tasks (Standley et al., 2020; Fifty et al., 2021). In this work, we show that a single multi-task model can match the performance of task-specific models when the task-specific models show similar representations across all of their hidden layers and their gradients are aligned, i.e., their gradients follow the same direction. We hypothesize that these observations explain the effectiveness of multi-task learning. We validate our observations on our internal radiologist-annotated datasets on the cervical and lumbar spine. Our method is simple and intuitive, and can be used in a wide range of NLP problems.
DOI: 10.48550/arxiv.2205.02979
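
The summary above refers to two measurable quantities: representation similarity across hidden layers and gradient alignment between tasks. Below is a minimal sketch, not taken from the paper, of how these two diagnostics are commonly computed: gradient alignment as the cosine similarity of per-task loss gradients with respect to shared parameters, and representation similarity via linear CKA. The toy encoder, the data, and the choice of CKA as the similarity measure are assumptions made for illustration only; the abstract does not specify the authors' exact metrics.

```python
# Illustrative sketch (not the authors' code) of the two diagnostics the abstract
# mentions: (1) gradient alignment between tasks and (2) hidden-representation
# similarity. The toy model, data, and the use of linear CKA are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)


def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into a single vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def gradient_alignment(loss_a, loss_b, params):
    """Cosine similarity of the two task gradients; +1 means same direction."""
    ga, gb = flat_grad(loss_a, params), flat_grad(loss_b, params)
    return torch.nn.functional.cosine_similarity(ga, gb, dim=0).item()


def linear_cka(x, y):
    """Linear CKA between two [n_samples, dim] representation matrices."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (x.T @ y).norm() ** 2
    return (hsic / ((x.T @ x).norm() * (y.T @ y).norm())).item()


# Toy shared encoder with two task-specific heads (two binary classification tasks).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head_a, head_b = nn.Linear(64, 2), nn.Linear(64, 2)
criterion = nn.CrossEntropyLoss()

x = torch.randn(128, 32)
y_a = torch.randint(0, 2, (128,))
y_b = torch.randint(0, 2, (128,))

h = encoder(x)
loss_a = criterion(head_a(h), y_a)
loss_b = criterion(head_b(h), y_b)

shared_params = list(encoder.parameters())
print("gradient alignment:", gradient_alignment(loss_a, loss_b, shared_params))

# For representation similarity, one would compare corresponding hidden-layer
# activations of two separately trained task-specific models in the same way:
print("CKA(h, h):", linear_cka(h.detach(), h.detach()))  # identical reps -> 1.0
```

In this framing, gradient alignment close to +1 and layer-wise CKA close to 1 between task-specific models would correspond to the conditions under which the abstract reports that a single multi-task model matches the task-specific models.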