Evaluating Spatial Understanding of Large Language Models
Main Authors: , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Large language models (LLMs) show remarkable capabilities across a variety of tasks. Despite the models only seeing text in training, several recent studies suggest that LLM representations implicitly capture aspects of the underlying grounded concepts. Here, we explore LLM representations of a particularly salient kind of grounded knowledge -- spatial relationships. We design natural-language navigation tasks and evaluate the ability of LLMs, in particular GPT-3.5-turbo, GPT-4, and Llama2 series models, to represent and reason about spatial structures. These tasks reveal substantial variability in LLM performance across different spatial structures, including square, hexagonal, and triangular grids, rings, and trees. In extensive error analysis, we find that LLMs' mistakes reflect both spatial and non-spatial factors. These findings suggest that LLMs appear to capture certain aspects of spatial structure implicitly, but room for improvement remains.
DOI: 10.48550/arxiv.2310.14540
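To illustrate the kind of natural-language navigation task described in the summary, the sketch below builds a random walk on a square grid and renders it as a text prompt with a checkable ground-truth answer. This is a minimal, assumed example: the prompt wording, grid size, starting cell, and coordinate convention are illustrative choices, not the benchmark format used in the paper.

```python
import random

# Illustrative sketch (not the paper's actual benchmark): generate a
# natural-language navigation prompt on an n x n square grid and return
# the ground-truth end cell for checking a model's answer.

MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def make_square_grid_task(n=5, steps=6, seed=0):
    rng = random.Random(seed)
    x, y = n // 2, n // 2  # start near the center of the grid
    directions = []
    for _ in range(steps):
        # only propose moves that stay on the grid
        legal = [d for d, (dx, dy) in MOVES.items()
                 if 0 <= x + dx < n and 0 <= y + dy < n]
        d = rng.choice(legal)
        dx, dy = MOVES[d]
        x, y = x + dx, y + dy
        directions.append(d)
    prompt = (
        f"You are on a {n}x{n} grid, starting at the center cell. "
        f"You move {', then '.join(directions)}. "
        "Which cell are you in now, given as (column, row) with (0, 0) at the bottom-left?"
    )
    return prompt, (x, y)

if __name__ == "__main__":
    prompt, answer = make_square_grid_task()
    print(prompt)
    print("ground truth:", answer)
```

The same pattern extends to the other structures mentioned in the summary (rings, trees, hexagonal or triangular grids) by swapping the move set and adjacency rule while keeping the prompt-plus-ground-truth design.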