From {Solution Synthesis} to {Student Attempt Synthesis} for Block-Based Visual Programming Tasks
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Block-based visual programming environments are increasingly used to
introduce computing concepts to beginners. Given that programming tasks are
open-ended and conceptual, novice students often struggle when learning in
these environments. AI-driven programming tutors hold great promise in
automatically assisting struggling students, and need several components to
realize this potential. We investigate the crucial component of student
modeling, in particular, the ability to automatically infer students'
misconceptions for predicting (synthesizing) their behavior. We introduce a
novel benchmark, StudentSyn, centered around the following challenge: For a
given student, synthesize the student's attempt on a new target task after
observing the student's attempt on a fixed reference task. This challenge is
akin to that of program synthesis; however, instead of synthesizing a
{solution} (i.e., program an expert would write), the goal here is to
synthesize a {student attempt} (i.e., program that a given student would
write). We first show that human experts (TutorSS) can achieve high performance
on the benchmark, whereas simple baselines perform poorly. Then, we develop two
neuro/symbolic techniques (NeurSS and SymSS) in a quest to close this gap with
TutorSS.
DOI: 10.48550/arxiv.2205.01265