Evaluating Two Approaches to Assessing Student Progress in Cybersecurity Exercises

Cybersecurity students need to develop practical skills such as using command-line tools. Hands-on exercises are the most direct way to assess these skills, but assessing students' mastery is a challenging task for instructors. We aim to alleviate this issue by modeling and visualizing student progress automatically throughout the exercise. The progress is summarized by graph models based on the shell commands students typed to achieve discrete tasks within the exercise. We implemented two types of models and compared them using data from 46 students at two universities. To evaluate our models, we surveyed 22 experienced computing instructors and qualitatively analyzed their responses. The majority of instructors interpreted the graph models effectively and identified strengths, weaknesses, and assessment use cases for each model. Based on the evaluation, we provide recommendations to instructors and explain how our graph models innovate teaching and promote further research. The impact of this paper is threefold. First, it demonstrates how multiple institutions can collaborate to share approaches to modeling student progress in hands-on exercises. Second, our modeling techniques generalize to data from different environments to support student assessment, even outside the cybersecurity domain. Third, we share the acquired data and open-source software so that others can use the models in their classes or research.
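This record does not reproduce the paper's actual models or its open-source tooling. As a rough, self-contained illustration of the general idea described in the abstract, the following Python sketch builds a per-task command-transition graph from hypothetical shell logs. The log format, the task and command names, and the choice of commands as graph nodes are assumptions made for this example, not the authors' design.

from collections import defaultdict

def build_task_graphs(logs):
    """Summarize shell-command logs as one directed transition graph per task.

    logs: dict mapping a student id to an ordered list of (task, command) pairs,
          in the order the commands were typed.
    Returns: dict mapping task -> {(prev_command, next_command): transition count}.
    """
    graphs = defaultdict(lambda: defaultdict(int))
    for events in logs.values():
        # Count consecutive command transitions, but only within the same task.
        for (task_a, cmd_a), (task_b, cmd_b) in zip(events, events[1:]):
            if task_a == task_b:
                graphs[task_a][(cmd_a, cmd_b)] += 1
    return {task: dict(edges) for task, edges in graphs.items()}

if __name__ == "__main__":
    # Hypothetical logs for two students; commands and task names are made up.
    logs = {
        "student-01": [("scan", "nmap"), ("scan", "nmap"), ("access", "ssh"), ("access", "cat")],
        "student-02": [("scan", "nmap"), ("access", "ssh"), ("access", "ls"), ("access", "cat")],
    }
    for task, edges in build_task_graphs(logs).items():
        print(task)
        for (src, dst), count in sorted(edges.items()):
            print(f"  {src} -> {dst}: {count}")

In a summary graph of this kind, heavily traversed edges point to common solution paths within a task, while rare edges may indicate students who struggled or took an unusual approach; the paper itself compares two concrete graph models of student progress and evaluates them with experienced instructors.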

Bibliographic Details
Published in: arXiv.org, 2021-12
Main Authors: Švábenský, Valdemar; Weiss, Richard; Cook, Jack; Vykopal, Jan; Čeleda, Pavel; Mache, Jens; Chudovský, Radoslav; Chattopadhyay, Ankur
Format: Article
Language: English
Subjects: Colleges & universities; Computer Science - Computers and Society; Computer Science - Cryptography and Security; Cybersecurity; Data acquisition; Evaluation; Skills; Source code; Students; Teachers
Online Access: Full text
DOI: 10.48550/arxiv.2112.02053
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals