Editing Conceptual Knowledge for Large Language Models
Format: Article
Language: English
Abstract: Recently, there has been growing interest in knowledge editing for Large Language Models (LLMs). Current approaches and evaluations only explore instance-level editing, while whether LLMs possess the capability to modify concepts remains unclear. This paper pioneers the investigation of editing conceptual knowledge for LLMs by constructing a novel benchmark dataset, ConceptEdit, and establishing a suite of new metrics for evaluation. The experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they also have the potential to distort the related instance-level knowledge in LLMs, leading to poor performance. We anticipate this can inspire further progress toward a better understanding of LLMs. Our project homepage is available at https://zjunlp.github.io/project/ConceptEdit.
DOI: 10.48550/arxiv.2403.06259