Retrieval-augmented Multilingual Knowledge Editing
Format: Article
Language: English
Abstract: Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time. Updating knowledge via fine-tuning is computationally resource-hungry and not reliable, and so knowledge editing (KE) has developed as an effective and economical alternative to inject new knowledge or to fix factual errors in LLMs. Although there has been considerable interest in this area, current KE research exclusively focuses on the monolingual setting, typically in English. However, what happens if the new knowledge is supplied in one language, but we would like to query the LLM in a different language? To address the problem of multilingual knowledge editing, we propose Retrieval-augmented Multilingual Knowledge Editor (ReMaKE) to update new knowledge in LLMs. ReMaKE can perform model-agnostic knowledge editing in multilingual settings. ReMaKE concatenates the new knowledge retrieved from a multilingual knowledge base with prompts. Our experimental results show that ReMaKE outperforms baseline knowledge editing methods by a significant margin and is the first KE method to work in a multilingual setting. We provide our multilingual knowledge editing dataset (MzsRE) in 12 languages, which, along with code and additional project information, is available at https://github.com/Vicky-Wil/ReMaKE.
DOI: 10.48550/arxiv.2312.13040
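
The abstract describes ReMaKE's core step as retrieving the relevant new fact from a multilingual knowledge base and concatenating it with the query prompt before it is passed to the LLM. The sketch below illustrates only that retrieve-then-concatenate step; the encoder choice, similarity threshold, toy knowledge base, and prompt template are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of retrieval-augmented multilingual knowledge editing:
# retrieve the most relevant stored edit and prepend it to the query prompt.
# Encoder, threshold, and prompt template are assumptions, not ReMaKE's code.
import numpy as np
from sentence_transformers import SentenceTransformer

# A multilingual encoder lets a query in one language match an edit stored in another.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Toy multilingual knowledge base holding new or corrected facts (the "edits").
knowledge_base = [
    "The capital of Australia is Canberra.",
    "Die Hauptstadt von Frankreich ist Paris.",
]
kb_embeddings = encoder.encode(knowledge_base, normalize_embeddings=True)

def build_edited_prompt(query: str, threshold: float = 0.4) -> str:
    """Retrieve the best-matching edit and concatenate it with the prompt."""
    query_embedding = encoder.encode([query], normalize_embeddings=True)[0]
    scores = kb_embeddings @ query_embedding  # cosine similarity (embeddings are normalized)
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return query  # no sufficiently relevant edit: leave the prompt unchanged
    return f"New fact: {knowledge_base[best]}\nQuestion: {query}\nAnswer:"

# A French query can be answered from an edit stored in English.
print(build_edited_prompt("Quelle est la capitale de l'Australie ?"))
```

In this setup the edited prompt is simply fed to an unmodified LLM, which is consistent with the abstract's claim that the editing is model-agnostic.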