LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model
Format: | Article |
Language: | English |
Abstract: | This paper describes our submission to subtasks A and B of SemEval-2020
Task 4. For subtask A, we use an ALBERT-based model with an improved input form
to pick out the common sense statement from two statement candidates. For
subtask B, we use a multiple-choice model enhanced by a hint sentence mechanism
to select, from the given options, the reason why a statement is against common
sense. Besides, we propose a novel transfer learning strategy between the
subtasks which helps improve performance. The accuracy scores of our system are
95.6 / 94.9 on the official test set, ranking 7th / 2nd on the Post-Evaluation
leaderboard. |
DOI: | 10.48550/arxiv.2007.02540 |
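
The abstract describes an ALBERT-based model that chooses between two statement candidates for subtask A. As a rough illustration only, the sketch below scores such a pair with a generic ALBERT multiple-choice head from the `transformers` library; the checkpoint name, prompt, and input formatting are assumptions for illustration and are not the authors' "improved input form" or trained system.

```python
# Illustrative sketch only: scoring two statement candidates with a generic
# ALBERT multiple-choice head. The checkpoint, prompt, and input formatting
# are assumptions, not the paper's actual "improved input form" or weights.
import torch
from transformers import AlbertTokenizer, AlbertForMultipleChoice

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")
model.eval()

statements = [
    "He put an elephant into the fridge.",  # against common sense
    "He put a turkey into the fridge.",     # plausible
]
# Pair each candidate with the same question-style prompt (hypothetical).
prompt = "Which statement makes sense?"
encoding = tokenizer([prompt] * len(statements), statements,
                     padding=True, return_tensors="pt")

# The multiple-choice head expects tensors of shape (batch, num_choices, seq_len).
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

choice = logits.argmax(dim=-1).item()
print(f"Predicted sensible statement: {statements[choice]!r}")
```

A fine-tuned checkpoint and the paper's own input construction and hint-sentence mechanism would be needed to approach the reported accuracy; the sketch only shows the general model interface.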