Memorization Over Reasoning? Exposing and Mitigating Verbatim Memorization in Large Language Models' Character Understanding Evaluation
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Recently, Large Language Models (LLMs) have shown impressive performance in
character understanding tasks, such as analyzing the roles, personalities, and
relationships of fictional characters. However, the extensive pre-training
corpora used by LLMs raise concerns that they may rely on memorizing popular
fictional works rather than genuinely understanding and reasoning about them.
In this work, we argue that 'gist memory' (capturing essential meaning) should
be the primary mechanism for character understanding tasks, as opposed to
'verbatim memory' (exact string matching). We introduce a simple yet effective
method to mitigate mechanized memorization in character understanding
evaluations while preserving the essential implicit cues needed for
comprehension and reasoning. Our approach reduces memorization-driven
performance on popular fictional works from 96% accuracy to 72% and results in
up to an 18% drop in accuracy across various character understanding tasks.
These findings underscore the issue of data contamination in existing
benchmarks, which often measure memorization rather than true character
understanding. |
DOI: | 10.48550/arxiv.2412.14368 |