Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models
Saved in:
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Summary: | Teachers play an important role in imparting knowledge and guiding
learners, and the role of large language models (LLMs) as potential educators
is emerging as an important area of study. Recognizing LLMs' capability to
generate educational content can lead to advances in automated and
personalized learning. While LLMs have been tested for their comprehension and
problem-solving skills, their teaching capability remains largely unexplored.
In teaching, questioning is a key skill that guides students to analyze,
evaluate, and synthesize core concepts and principles. Our research therefore
introduces a benchmark that evaluates LLMs' questioning capability in
education as teachers by assessing the educational questions they generate,
using Anderson and Krathwohl's taxonomy across general, monodisciplinary, and
interdisciplinary domains. We shift the focus from LLMs as learners to LLMs as
educators, assessing their teaching capability by guiding them to generate
questions. We apply four metrics, namely relevance, coverage,
representativeness, and consistency, to evaluate the educational quality of
LLMs' outputs. Our results indicate that GPT-4 demonstrates significant
potential in teaching general, humanities, and science courses, while Claude2
appears more apt as an interdisciplinary teacher. Furthermore, the automatic
scores align with human judgments. |
DOI: | 10.48550/arxiv.2408.10947 |
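The abstract names four evaluation metrics but this record does not describe how they are combined. Below is a minimal sketch of one plausible aggregation scheme, assuming per-question scores on each metric and an equal-weight average; the class, function names, and weighting are hypothetical, not the paper's actual method:

```python
from dataclasses import dataclass

@dataclass
class QuestionScores:
    """Hypothetical per-question scores on the four metrics named in the abstract."""
    relevance: float
    coverage: float
    representativeness: float
    consistency: float

    def overall(self) -> float:
        # Assumption: equal-weight average; the record does not specify aggregation.
        return (self.relevance + self.coverage
                + self.representativeness + self.consistency) / 4.0

def benchmark_score(scores: list[QuestionScores]) -> float:
    """Mean overall score across all generated questions (hypothetical aggregation)."""
    return sum(s.overall() for s in scores) / len(scores)

# Illustrative values only.
scores = [QuestionScores(0.9, 0.8, 0.7, 1.0), QuestionScores(0.6, 0.7, 0.8, 0.9)]
print(benchmark_score(scores))
```

Weighted or rubric-based schemes would slot in by replacing `overall`; the point is only that each generated question receives a score on all four axes before aggregation.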