Building socially responsible conversational agents using big data to support online learning: A case with Algebra Nation
Published in: British Journal of Educational Technology, 2022-07, Vol. 53 (4), pp. 776-803
Format: Article
Language: English
Online access: Full text
Abstract: A discussion forum is a valuable tool to support student learning in online contexts. However, interactions in online discussion forums are sparse, leading to further issues such as low engagement and dropout. Recent educational studies have examined the affordances of conversational agents (CAs) powered by artificial intelligence (AI) to automatically support student participation in discussion forums. However, few studies have paid attention to the safety of CAs. This study aimed to address the safety challenges of CAs constructed with educational big data to support learning. Specifically, we proposed a safety-aware CA model, benchmarked against two state-of-the-art (SOTA) models, to support high school student learning on an online algebra learning platform. We applied automatic text analysis to evaluate the safety and socio-emotional support levels of CA-generated and human-generated texts. A large dataset was used to train and evaluate the CA models, consisting of all discussion post-reply pairs (n = 2,097,139) by 71,918 online math learners from 2015 to 2021. Results show that while SOTA models can generate supportive texts, their safety is compromised. Meanwhile, our proposed model can effectively enhance the safety of generated texts while providing comparable support.
Practitioner notes
What is already known about this topic
Online discussion forums are often plagued by a lack of interaction among students, driven by factors such as the expectation of receiving no response and perceptions of topic irrelevance, which lower motivation to participate.
AI‐based conversational agents can automatically support students' interactions in online discussion forums at a large scale, and their generated responses can be human‐like, contextually coherent and socio‐emotionally supportive.
Unsafe discourse exchanges between students and conversational agents can be dangerous as identity attacks, aggravation and bullying behaviours embedded in discourses can disrupt students' knowledge inquiry and negatively influence student motivation and engagement. However, few educational studies have paid attention to the safety of conversational agents.
What this paper adds
This study proposes and synthesizes strategies to build AI-based conversational agents that automatically support online discussions with safe and supportive discourses.
This study reveals the relationship between discourse safety and social support, suggesting that supportive discourses can also be unsafe.
ISSN: 0007-1013, 1467-8535
DOI: 10.1111/bjet.13227