EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models
Format: Article
Language: English
Abstract: Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, meaning they are unaware of unseen events or generate text with incorrect facts owing to outdated or noisy data. To this end, many knowledge editing approaches for LLMs have emerged, aiming to subtly inject or edit updated knowledge, or adjust undesired behavior, while minimizing the impact on unrelated inputs. Nevertheless, due to significant differences among knowledge editing methods and variations in task setups, no standard implementation framework is available to the community, which hinders practitioners from applying knowledge editing in real applications. To address these issues, we propose EasyEdit, an easy-to-use knowledge editing framework for LLMs. It supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs such as T5, GPT-J, and LlaMA. Empirically, we report knowledge editing results on LlaMA-2 with EasyEdit, demonstrating that knowledge editing surpasses traditional fine-tuning in terms of reliability and generalization. We have released the source code on GitHub, along with Google Colab tutorials and comprehensive documentation for beginners. In addition, we present an online system for real-time knowledge editing, and a demo video.
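The abstract compares knowledge editing to fine-tuning on two axes: reliability (does the model now produce the new fact on the edited prompt?) and generalization (does the edit carry over to paraphrases?). As an illustrative sketch only, the following toy metrics show how these two scores are commonly computed in the knowledge-editing literature; the field names (`prompt`, `target_new`, `paraphrases`) are hypothetical and not necessarily EasyEdit's actual API.

```python
def reliability(model, edits):
    """Fraction of edited prompts on which the model returns the new target."""
    hits = sum(model(e["prompt"]) == e["target_new"] for e in edits)
    return hits / len(edits)

def generalization(model, edits):
    """Same check, but on paraphrases of each edited prompt."""
    pairs = [(p, e["target_new"]) for e in edits for p in e["paraphrases"]]
    hits = sum(model(p) == t for p, t in pairs)
    return hits / len(pairs)

# Toy stand-in for an edited model: a simple prompt -> answer lookup.
lookup = {
    "The capital of X is": "B",        # edited prompt now gives the new answer
    "X's capital city is": "B",        # one paraphrase generalizes
    "Name the capital of X:": "A",     # the other still gives the old answer
}
model = lambda prompt: lookup.get(prompt, "")

edits = [{
    "prompt": "The capital of X is",
    "target_new": "B",
    "paraphrases": ["X's capital city is", "Name the capital of X:"],
}]

print(reliability(model, edits))      # 1.0
print(generalization(model, edits))   # 0.5
```

A real evaluation would replace the lookup with an edited LLM's decoding function; the metric logic stays the same.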
DOI: 10.48550/arxiv.2308.07269