Assessing Argumentation Using Machine Learning and Cognitive Diagnostic Modeling

Published in: Research in Science Education (Australasian Science Education Research Association), 2023-04, Vol. 53 (2), pp. 405-424
Authors: Zhai, Xiaoming; Haudek, Kevin C.; Ma, Wenchao
Format: Article
Language: English
Online access: Full text
Description: In this study, we developed machine learning algorithms to automatically score students’ written arguments and then applied the cognitive diagnostic modeling (CDM) approach to examine students’ cognitive patterns of scientific argumentation. We abstracted three types of skills (i.e., attributes) critical for successful argumentation practice: making claims, using evidence, and providing warrants. We developed 19 constructed-response items, each requiring multiple cognitive skills. We collected responses from 932 students in Grades 5 to 8 and developed machine learning algorithmic models to automatically score their responses. We then applied CDM to analyze their cognitive patterns. Results indicate that machine scoring achieved an average machine–human agreement of Cohen’s κ = 0.73 (SD = 0.09). We found that students clustered into 21 groups based on their argumentation performance, each revealing a different cognitive pattern. Within each group, students showed different abilities regarding making claims, using evidence, and providing warrants to justify how the evidence supports a claim. The 9 most frequent groups accounted for more than 70% of the students in the study. Our in-depth analysis of individual students suggests that students with the same total ability score might vary in the specific cognitive skills required to accomplish argumentation. This result illustrates the advantage of CDM in assessing students’ fine-grained cognition during argumentation and other scientific practices.
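The machine–human agreement reported in the abstract is Cohen’s κ, which corrects raw percent agreement for the agreement expected by chance. A minimal illustrative sketch of the statistic follows; the scores below are hypothetical and do not come from the study, whose actual scoring pipeline is not shown here:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently,
    # each following their own marginal score distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical human vs. machine scores on ten constructed responses
human   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
machine = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(human, machine), 2))  # → 0.58
```

Here 8 of 10 scores agree (p_o = 0.8), but both raters assign "1" to 60% of responses, so half that agreement is expected by chance (p_e = 0.52), yielding κ ≈ 0.58 rather than 0.8.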
DOI: 10.1007/s11165-022-10062-w
ISSN: 0157-244X
EISSN: 1573-1898
Source: Education Source (EBSCOhost); SpringerLink Journals
Subjects: Algorithms
Artificial Intelligence
Cognition
Cognition & reasoning
Cognitive ability
Cognitive Measurement
Diagnostic systems
Diagnostic Tests
Education
Elementary School Students
Learning algorithms
Machine learning
Middle School Students
Modelling
Persuasive Discourse
Science Education
Science Process Skills
Scores
Skills
Students
Thinking Skills
Writing (Composition)