The assessment of collaborative problem solving in PISA 2015: Can computer agents replace humans?

Despite the relevance of collaborative problem solving (CPS), there are limited empirical results on the assessment of CPS. In 2015, the large-scale Programme for International Student Assessment (PISA) first assessed CPS with virtual tasks requiring participants to collaborate with computer-simulated agents (human-to-agent; H-A).


Bibliographic details
Published in: Computers in human behavior 2020-03, Vol.104, p.105624, Article 105624
Main authors: Herborn, Katharina; Stadler, Matthias; Mustafić, Maida; Greiff, Samuel
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page
container_issue
container_start_page 105624
container_title Computers in human behavior
container_volume 104
creator Herborn, Katharina
Stadler, Matthias
Mustafić, Maida
Greiff, Samuel
description Despite the relevance of collaborative problem solving (CPS), there are limited empirical results on the assessment of CPS. In 2015, the large-scale Programme for International Student Assessment (PISA) first assessed CPS with virtual tasks requiring participants to collaborate with computer-simulated agents (human-to-agent; H-A). The approach created dynamic CPS situations while standardizing assessment conditions across participating countries. However, H-A approaches are sometimes regarded as poor substitutes for natural collaboration, and only a few studies have examined whether collaborations with agents capture the real dynamics of human interaction. To address this, we validated the original PISA 2015 CPS assessment by investigating the effects of replacing computer agents with real students in classroom tests (human-to-human; H-H). We obtained the original PISA 2015 CPS tasks from the OECD and replaced agents with real students to provide a more realistic collaboration environment with less control over conversations; the H-H condition was less constrained than the H-A condition but still limited by predefined sets of possible answers from which the human partners made selections. The interface remained nearly identical to the original PISA 2015 CPS assessment. Students were told the type of collaboration partner, namely human versus agent. We applied structural equation modeling and multivariate analyses of variance to a sample of 386 students to identify the dimensionality of the CPS construct and to compare effects on CPS performance accuracy and the number of behavioral actions. Results indicated no significant differences between types of collaboration partner; however, students performed a larger number of actions when collaborating with a human partner.
•We validated the original PISA 2015 Collaborative Problem Solving tasks.
•We found no significant differences per type of collaboration partner (agents or classmates).
•Students performed a larger number of actions when collaborating with classmates.
doi_str_mv 10.1016/j.chb.2018.07.035
format Article
publisher Elmsford: Elsevier Ltd
fulltext fulltext
identifier ISSN: 0747-5632
ispartof Computers in human behavior, 2020-03, Vol.104, p.105624, Article 105624
issn 0747-5632
1873-7692
language eng
recordid cdi_proquest_journals_2353610787
source Elsevier ScienceDirect Journals
subjects Agent technologies
Assessment
Collaboration
Collaborative problem solving
Computer simulation
Empirical analysis
Multivariate statistical analysis
PISA 2015
Problem solving
Students
Validation
Variance analysis
title The assessment of collaborative problem solving in PISA 2015: Can computer agents replace humans?
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T11%3A07%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20assessment%20of%20collaborative%20problem%20solving%20in%20PISA%202015:%20Can%20computer%20agents%20replace%20humans?&rft.jtitle=Computers%20in%20human%20behavior&rft.au=Herborn,%20Katharina&rft.date=2020-03&rft.volume=104&rft.spage=105624&rft.pages=105624-&rft.artnum=105624&rft.issn=0747-5632&rft.eissn=1873-7692&rft_id=info:doi/10.1016/j.chb.2018.07.035&rft_dat=%3Cproquest_cross%3E2353610787%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2353610787&rft_id=info:pmid/&rft_els_id=S0747563218303571&rfr_iscdi=true