How Well Can Knowledge Edit Methods Edit Perplexing Knowledge?
Format: Article
Language: English
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities, but updating their knowledge post-training remains a critical challenge. While recent model editing techniques like Rank-One Model Editing (ROME) show promise, their effectiveness may vary with the nature of the knowledge being edited. We introduce the concept of "perplexingness": the degree to which new knowledge conflicts with an LLM's learned conceptual hierarchies and categorical relationships. For instance, editing "British Shorthair is a kind of cat" to "British Shorthair is a kind of dog" is a low-perplexingness edit within the same taxonomic level, while editing "A cat is a kind of animal" to "A cat is a kind of plant" is a high-perplexingness edit that violates fundamental categorical boundaries. To systematically investigate this phenomenon, we introduce HierarchyData, a carefully curated dataset of 99 hyponym-hypernym pairs across diverse categories. Through controlled experiments across three models and four editing methods, we demonstrate a strong negative correlation between the perplexingness of new knowledge and the effectiveness of knowledge editing. Our analysis reveals that edits involving more abstract concepts (hypernyms) generally exhibit higher perplexingness and are more resistant to modification than their more specific counterparts (hyponyms). These findings highlight a fundamental challenge in LLM knowledge editing: the more a new fact contradicts an LLM's learned conceptual hierarchies, the harder it becomes to reliably encode that knowledge.
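As background on the rank-one editing family the abstract refers to: ROME writes a new key-value association into a transformer MLP weight matrix via a rank-one update, regularized by a pre-computed key covariance. The covariance-free sketch below is an illustrative simplification of that idea, not the paper's or ROME's exact update; the function name and toy dimensions are our own.

```python
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Minimal rank-one update so that the edited matrix maps k exactly to v.

    ROME additionally weights the update by an estimated key covariance to
    limit interference with other stored associations; that refinement is
    omitted here for clarity.
    """
    residual = v - W @ k                       # what the current weights get wrong for key k
    return W + torch.outer(residual, k) / (k @ k)

# Toy check: after the edit, the new weights reproduce the target value.
W = torch.randn(8, 4)
k, v = torch.randn(4), torch.randn(8)
W_new = rank_one_edit(W, k, v)
assert torch.allclose(W_new @ k, v, atol=1e-5)
```

Because the change is the outer product of the residual with the key, the edit is exact along k while remaining a minimal (rank-one) perturbation of the rest of the matrix.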
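The abstract does not spell out how perplexingness is scored, but a natural proxy is the model's own perplexity on the statement of the new fact. The sketch below is a minimal illustration under that assumption; the model choice (gpt2) and the helper statement_perplexity are ours, not the paper's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM serves for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def statement_perplexity(statement: str) -> float:
    """Perplexity of the statement under the LM; higher suggests a more perplexing fact."""
    enc = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids yields the mean token-level cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# The abstract's hyponym-level vs. hypernym-level example edits:
low = statement_perplexity("British Shorthair is a kind of dog")
high = statement_perplexity("A cat is a kind of plant")
print(f"hyponym-level edit: {low:.1f}, hypernym-level edit: {high:.1f}")
```

Under this proxy, the paper's finding would predict that edits whose target statements score higher (like the hypernym-level example) are harder for editing methods to encode reliably.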
DOI: 10.48550/arxiv.2406.17253