A Comparison of Google and ChatGPT for Automatic Generation of Health-related Multiple-choice Questions
Critical to producing accessible content is an understanding of what characteristics affect understanding and comprehension. To answer this question, we are producing a large corpus of health-related texts with associated questions that can be read or listened to by study participants to measure the difficulty of the underlying content.
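The abstract describes the question-generation setup only at a high level: a health text snippet is turned into a multiple-choice question by ChatGPT (or pulled from Google's related searches), and the resulting questions differ in how closely they track the snippet's wording. As a rough illustration of the ChatGPT half of that setup, the sketch below prompts a ChatGPT-style model to produce one multiple-choice question from a snippet and then measures simple lexical overlap between the question and the snippet. The prompt wording, the model name, the Jaccard overlap metric, and the use of the `openai` v1 Python client are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (not the paper's implementation): generate one multiple-choice
# question from a health text snippet with a ChatGPT-style model, then measure
# how lexically close the question is to the snippet. Prompt, model name, and
# the Jaccard metric are illustrative assumptions.
import re

from openai import OpenAI  # assumes the openai v1 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_mcq(snippet: str) -> str:
    """Ask the model for one multiple-choice question about the snippet."""
    prompt = (
        "Write one multiple-choice question with four answer options, "
        "marking the correct one, about the following health text:\n\n"
        f"{snippet}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper only says "ChatGPT"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def jaccard_overlap(a: str, b: str) -> float:
    """Token-level Jaccard similarity: a crude proxy for 'similar to the snippet'."""
    tokens = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


if __name__ == "__main__":
    snippet = (
        "High blood pressure often has no symptoms but raises the risk "
        "of heart disease and stroke."
    )
    question = generate_mcq(snippet)
    print(question)
    print(f"lexical overlap with snippet: {jaccard_overlap(question, snippet):.2f}")
```

Under this kind of measure, a higher overlap score would correspond to the abstract's observation that ChatGPT questions stay closer to the snippet's wording, while questions drawn from Google's related searches would be expected to show more lexical variation and score lower.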
Saved in:
Published in: | AMIA Summits on Translational Science proceedings, 2024, Vol. 2024, p. 679 |
---|---|
Main authors: | Song, Vivien; Kauchak, David; Hamre, John; Morgenstein, Nick; Leroy, Gondy |
Format: | Article |
Language: | eng |
Online access: | Full text |
container_end_page | |
container_issue | |
container_start_page | 679 |
container_title | AMIA Summits on Translational Science proceedings |
container_volume | 2024 |
creator | Song, Vivien Kauchak, David Hamre, John Morgenstein, Nick Leroy, Gondy |
description | Critical to producing accessible content is an understanding of what characteristics affect understanding and comprehension. To answer this question, we are producing a large corpus of health-related texts with associated questions that can be read or listened to by study participants to measure the difficulty of the underlying content, which can later be used to better understand text difficulty and user comprehension. In this paper, we examine methods for automatically generating multiple-choice questions using Google's related questions and ChatGPT. Overall, we find both algorithms generate reasonable questions that are complementary; ChatGPT questions are more similar to the snippet while Google related-search questions have more lexical variation. |
format | Article |
pmid | 38827114 |
publisher | United States |
fulltext | fulltext |
identifier | ISSN: 2153-4063 |
ispartof | AMIA Summits on Translational Science proceedings, 2024, Vol.2024, p.679 |
issn | 2153-4063 |
language | eng |
recordid | cdi_proquest_miscellaneous_3064139713 |
source | EZB-FREE-00999 freely available EZB journals; PubMed Central |
title | A Comparison of Google and ChatGPT for Automatic Generation of Health-related Multiple-choice Questions |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T14%3A48%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Comparison%20of%20Google%20and%20ChatGPT%20for%20Automatic%20Generation%20of%20Health-related%20Multiple-choice%20Questions&rft.jtitle=AMIA%20Summits%20on%20Translational%20Science%20proceedings&rft.au=Song,%20Vivien&rft.date=2024&rft.volume=2024&rft.spage=679&rft.pages=679-&rft.issn=2153-4063&rft.eissn=2153-4063&rft_id=info:doi/&rft_dat=%3Cproquest_pubme%3E3064139713%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3064139713&rft_id=info:pmid/38827114&rfr_iscdi=true |