LearnLM: Improving Gemini for Learning
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Today's generative AI systems are tuned to present information by default
rather than engage users in service of learning as a human tutor would. To
address the wide range of potential education use cases for these systems, we
reframe the challenge of injecting pedagogical behavior as one of
"pedagogical instruction following", where training and evaluation
examples include system-level instructions describing the specific pedagogy
attributes present or desired in subsequent model turns. This framing avoids
committing our models to any particular definition of pedagogy, and instead
allows teachers or developers to specify desired model behavior. It also clears
a path to improving Gemini models for learning -- by enabling the addition of
our pedagogical data to post-training mixtures -- alongside their rapidly
expanding set of capabilities. Both represent important changes from our
initial tech report. We show how training with pedagogical instruction
following produces a LearnLM model (available on Google AI Studio) that is
preferred substantially by expert raters across a diverse set of learning
scenarios, with average preference strengths of 31% over GPT-4o, 11% over
Claude 3.5, and 13% over the Gemini 1.5 Pro model LearnLM was based on.
DOI: 10.48550/arxiv.2412.16429
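
The abstract above frames pedagogy as instruction following: each training or evaluation example pairs a system-level instruction describing desired pedagogy attributes with the conversation turns that should exhibit them. As a rough illustration only — the field names, attribute wording, and structure below are assumptions, not the paper's published data schema — such an example might look like this:

```python
# Hypothetical sketch of a pedagogical-instruction-following training example.
# All field names and attribute phrasing are illustrative assumptions; the
# paper does not publish its exact data format.
example = {
    # System-level instruction describing the pedagogy attributes the model
    # should exhibit in its subsequent turns.
    "system_instruction": (
        "You are a tutor. Do not give away the final answer. "
        "Ask one guiding question at a time and adapt to the student's level."
    ),
    # Conversation turns; the final model turn is the one evaluated for
    # following the pedagogy instruction above.
    "turns": [
        {"role": "user", "content": "Why does ice float on water?"},
        {"role": "model", "content": (
            "Good question! What do you already know about how density "
            "relates to floating?"
        )},
    ],
}
```

This framing keeps the definition of "good pedagogy" out of the model itself: a teacher or developer changes the system instruction, and the same trained model is expected to follow it.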