Diverse Distractor Generation for Constructing High-Quality Multiple Choice Questions
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, Vol. 30, pp. 280-291
Main authors:
Format: Article
Language: English
Abstract: The distractor generation task aims to generate incorrect options (i.e., distractors) for multiple choice questions based on an article. Existing methods for this task often use a standard encoder-decoder framework. However, these methods tend to generate semantically similar distractors, since the same article representation is used to generate different distractors. Distractors with similar semantics are effectively equivalent, and because the correct answer is unique, students can eliminate them even without reading the article. In this paper, we propose a multi-selector generation network (MSG-Net) that generates distractors with rich semantics based on different sentences in an article. MSG-Net adopts a multi-selector mechanism to select multiple different sentences in the article that are useful for generating diverse distractors. Specifically, question-aware and answer-aware mechanisms are introduced to assist in selecting useful key sentences, where each key sentence is coherent with the question and not equivalent to the answer. MSG-Net then generates a distractor from each selected key sentence, yielding distractors with different semantics. Extensive experiments on the RACE and Cosmos QA datasets show that the proposed model outperforms state-of-the-art models in generating diverse distractors.
ISSN: 2329-9290, 2329-9304
DOI: 10.1109/TASLP.2021.3138706
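
The abstract above describes a selection step in which key sentences are chosen to be coherent with the question but not equivalent to the answer, and each selected sentence then seeds a semantically different distractor. The sketch below is only a minimal illustration of that selection idea, assuming a simple word-overlap heuristic in place of MSG-Net's learned question-aware and answer-aware selectors; the function names and scoring scheme are invented for illustration and are not the authors' implementation.

```python
# Illustrative sketch (not MSG-Net's code): pick article sentences that are
# relevant to the question but do not restate the correct answer, so that
# each selected sentence can seed a distractor with different semantics.
import re


def tokenize(text):
    """Lowercase word tokenizer (a simple stand-in for a real tokenizer)."""
    return set(re.findall(r"[a-z']+", text.lower()))


def overlap(a, b):
    """Jaccard overlap between two token sets."""
    return len(a & b) / max(len(a | b), 1)


def select_key_sentences(article_sentences, question, answer, k=3):
    """Return up to k sentences scoring high on question relevance
    and low on similarity to the correct answer."""
    q_tokens, a_tokens = tokenize(question), tokenize(answer)
    scored = []
    for sent in article_sentences:
        s_tokens = tokenize(sent)
        # Question-aware term: prefer sentences coherent with the question.
        q_score = overlap(s_tokens, q_tokens)
        # Answer-aware term: penalize sentences that restate the answer.
        a_penalty = overlap(s_tokens, a_tokens)
        scored.append((q_score - a_penalty, sent))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in scored[:k]]


if __name__ == "__main__":
    article = [
        "The festival was moved to July because of heavy rain in June.",
        "Tickets for the festival sold out within two days.",
        "The festival takes place in the city park every year.",
    ]
    question = "Why was the festival moved to July?"
    answer = "Because of heavy rain in June."
    # Each selected sentence would then condition a separate decoding pass,
    # producing distractors that differ in semantics.
    for sent in select_key_sentences(article, question, answer, k=2):
        print(sent)
```

Running the script prints the two sentences that are on-topic for the question but do not contain the answer, which is the behaviour the selection mechanism aims for; in MSG-Net these selection scores are learned rather than computed from word overlap.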