Beyond Surface Structure: A Causal Assessment of LLMs' Comprehension Ability
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Large language models (LLMs) have shown remarkable capability in natural
language tasks, yet debate persists on whether they truly comprehend deep
structure (i.e., core semantics) or merely rely on surface structure (e.g.,
presentation format). Prior studies observe that LLMs' performance declines
when surface structure is intervened on, and argue that their success relies on
surface structure recognition. However, sensitivity to surface structure does
not preclude deep structure comprehension. Rigorously evaluating LLMs'
capability requires analyzing both, yet deep structure is often overlooked. To
this end, we assess LLMs' comprehension ability using causal mediation
analysis, aiming to fully characterize their capability to use both deep and
surface structures.
Specifically, we formulate the comprehension of deep structure as a direct
causal effect (DCE) and that of surface structure as an indirect causal effect
(ICE). Because isolating the mutual influences of deep and surface structures
is infeasible, the original DCE and ICE are not estimable; we therefore develop
quantifiable surrogates, the approximated DCE (ADCE) and the approximated ICE
(AICE). We further apply the ADCE to evaluate a series of mainstream LLMs,
showing that most of them exhibit deep structure comprehension ability, which
grows with prediction accuracy. Comparing ADCE and AICE demonstrates that
closed-source LLMs rely more on deep structure, while open-source LLMs are more
sensitive to surface structure, a sensitivity that decreases with model scale.
Theoretically, the ADCE is a bidirectional evaluation: it measures both the
sufficiency and the necessity of deep structure changes in causing output
variations, thus offering a more comprehensive assessment than accuracy, the
common evaluation metric for LLMs. Our work provides new insights into LLMs'
deep structure comprehension and offers novel methods for LLM evaluation. |
---|---|
DOI: | 10.48550/arxiv.2411.19456 |
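The abstract's causal framing can be sketched with the standard decomposition from causal mediation analysis; the notation below (deep structure d, surface structure s, potential output Y(d, s)) is illustrative and not necessarily the paper's own:

```latex
% Illustrative sketch of causal mediation analysis (assumed notation, not the paper's).
% Let Y(d, s) denote the model's potential output when the input carries deep
% structure d and surface structure s. Intervening to change (d, s) into (d', s'),
% the total effect on the output telescopes into a direct and an indirect term:
\[
\underbrace{\mathbb{E}\!\left[ Y(d', s') - Y(d, s) \right]}_{\text{total effect}}
= \underbrace{\mathbb{E}\!\left[ Y(d', s) - Y(d, s) \right]}_{\text{direct effect of deep structure (DCE)}}
+ \underbrace{\mathbb{E}\!\left[ Y(d', s') - Y(d', s) \right]}_{\text{indirect effect via surface structure (ICE)}}
\]
```

In this reading, the ADCE and AICE described in the abstract would serve as quantifiable approximations of the two right-hand terms when the interventions cannot be applied in isolation.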