Enhancing Distractor Generation for Multiple-Choice Questions with Retrieval Augmented Pretraining and Knowledge Graph Integration

In this paper, we tackle the task of distractor generation (DG) for multiple-choice questions. Our study introduces two key designs. First, we propose \textit{retrieval augmented pretraining}, which involves refining the language model pretraining to align it more closely with the downstream task of DG. Second, we explore the integration of knowledge graphs to enhance the performance of DG. Through experiments with benchmarking datasets, we show that our models significantly outperform the state-of-the-art results. Our best-performing model advances the F1@3 score from 14.80 to 16.47 on the MCQ dataset and from 15.92 to 16.50 on the SciQ dataset.
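For context on the reported numbers: F1@3 scores a model's top three generated distractors against the gold distractors of each question. The following is a minimal sketch, not the authors' evaluation code; it assumes exact matching after simple normalization, a detail the record does not specify.

def f1_at_3(generated, gold):
    """F1@3 for one question: compare the top-3 generated distractors
    against the gold distractor set, combining precision and recall."""
    top3 = [d.strip().lower() for d in generated[:3]]
    gold_set = {d.strip().lower() for d in gold}
    hits = sum(1 for d in top3 if d in gold_set)
    if hits == 0:
        return 0.0
    precision = hits / len(top3)
    recall = hits / len(gold_set)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: one of three candidates matches a gold distractor,
# so precision = recall = 1/3 and F1@3 = 1/3.
print(f1_at_3(["ribosome", "nucleus", "vacuole"],
              ["ribosome", "cell wall", "chloroplast"]))  # 0.333...

A dataset-level score would average this per-question value over all questions.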

Bibliographic Details

Published in: arXiv.org, 2024-06
Main Authors: Han-Cheng Yu, Yu-An Shih, Kin-Man Law, Kai-Yu Hsieh, Yu-Chen Cheng, Hsin-Chih Ho, Zih-An Lin, Wen-Chuan Hsu, Yao-Chung Fan
Format: Article
Language: English
Subjects: Datasets; Knowledge representation; Questions; Retrieval
Online Access: Full text
Identifier: EISSN 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org