Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
Saved in:
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Summary:

Parallel deep learning architectures such as fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have been able to perform well on many downstream tasks after fine-tuning on domain-specific datasets alone. However, using such powerful models on non-trivial tasks, such as ranking and large-document classification, remains a challenge due to the input-size limitations of parallel architectures and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's rho (0.338) and Mean Reciprocal Rank (0.9622) on the ACL-BioNLP workshop MediQA Question Answering shared task.
DOI: 10.48550/arxiv.1907.01643
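
The two metrics reported in the summary are standard ranking measures. As a minimal sketch (not the authors' evaluation code), the snippet below computes Mean Reciprocal Rank over per-question relevance labels and Spearman's rho over answer scores; the function name and toy data are hypothetical, and SciPy is assumed to be available.

```python
from scipy.stats import spearmanr


def mean_reciprocal_rank(ranked_relevance):
    """MRR: mean over questions of 1 / rank of the first relevant answer.

    Each element of `ranked_relevance` is a list of 0/1 relevance labels
    for one question's candidate answers, ordered best-first by the
    system; questions with no relevant answer contribute 0.
    """
    total = 0.0
    for labels in ranked_relevance:
        for rank, relevant in enumerate(labels, start=1):
            if relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)


# Toy example: two questions, three ranked candidate answers each.
relevance = [
    [0, 1, 0],  # first relevant answer at rank 2 -> reciprocal rank 0.5
    [1, 0, 0],  # first relevant answer at rank 1 -> reciprocal rank 1.0
]
print(mean_reciprocal_rank(relevance))  # 0.75

# Spearman's rho: rank correlation between system scores and reference
# scores for one question's candidate answers.
system_scores = [0.9, 0.4, 0.7, 0.1]
reference_scores = [4, 1, 3, 2]
rho, _ = spearmanr(system_scores, reference_scores)
print(rho)  # 0.8
```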