Evaluating Morphological Compositional Generalization in Large Language Models
Saved in:

Main Authors: , , , , , , , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Large language models (LLMs) have demonstrated significant progress in various natural language generation and understanding tasks. However, their linguistic generalization capabilities remain questionable, raising doubts about whether these models learn language similarly to humans. While humans exhibit compositional generalization and linguistic creativity in language use, the extent to which LLMs replicate these abilities, particularly in morphology, is under-explored. In this work, we systematically investigate the morphological generalization abilities of LLMs through the lens of compositionality. We define morphemes as compositional primitives and design a novel suite of generative and discriminative tasks to assess morphological productivity and systematicity. Focusing on agglutinative languages such as Turkish and Finnish, we evaluate several state-of-the-art instruction-finetuned multilingual models, including GPT-4 and Gemini. Our analysis shows that LLMs struggle with morphological compositional generalization, particularly when applied to novel word roots, with performance declining sharply as morphological complexity increases. While models can identify individual morphological combinations better than chance, their performance lacks systematicity, leading to significant accuracy gaps compared to humans.
DOI: 10.48550/arxiv.2410.12656