Evaluating Neural Language Models as Cognitive Models of Language Acquisition
Format: Article
Language: English
Abstract: The success of neural language models (LMs) on many technological tasks has brought about their potential relevance as scientific theories of language, despite some clear differences between LM training and child language acquisition. In this paper we argue that some of the most prominent benchmarks for evaluating the syntactic capacities of LMs may not be sufficiently rigorous. In particular, we show that template-based benchmarks lack the structural diversity commonly found in theoretical and psychological studies of language. When trained on small-scale data modeling child language acquisition, LMs can be readily matched by simple baseline models. We advocate for the use of readily available, carefully curated datasets that have been evaluated for gradient acceptability by large pools of native speakers and are designed to probe the structural basis of grammar specifically. On one such dataset, the LI-Adger dataset, LMs evaluate sentences in a way inconsistent with human language users. We conclude with suggestions for better connecting LMs with the empirical study of child language acquisition.
DOI: 10.48550/arxiv.2310.20093