A Survey on Knowledge Editing of Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, November 2024, pp. 1-17
Main Authors: Mazzia, Vittorio; Pedrani, Alessandro; Caciolai, Andrea; Rottmann, Kay; Bernardi, Davide
Format: Article
Language: English
Description
Summary: Deep neural networks are becoming increasingly pervasive in academia and industry, matching and surpassing human performance on a wide variety of tasks. However, just like humans, even the largest artificial neural networks (ANNs) make mistakes, and once-correct predictions can become invalid as the world changes over time. Augmenting training datasets with samples that account for mistakes or up-to-date information has become a common workaround in practical applications. However, the well-known phenomenon of catastrophic forgetting makes it difficult to achieve precise changes to the knowledge implicitly memorized in neural network parameters, often requiring full model retraining to obtain the desired behavior. Such retraining is expensive, unreliable, and incompatible with the current trend of large-scale self-supervised pretraining, making it necessary to find more efficient and effective methods for adapting neural network models to changing data. To address this need, knowledge editing (KE) is emerging as a novel area of research that aims to enable reliable, data-efficient, and fast changes to a pretrained target model without affecting its behavior on previously learned tasks. In this survey, we provide a brief review of this recent field of artificial intelligence research. We first introduce the problem of editing neural networks, formalize it in a common framework, and differentiate it from better-known branches of research such as continual learning. Next, we review the most relevant KE approaches and datasets proposed so far, grouping works into four families: regularization techniques, meta-learning, direct model editing, and architectural strategies. Finally, we outline intersections with other fields of research and potential directions for future work.
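
As a rough sketch of the editing problem the survey formalizes (the notation below is an illustrative assumption based on common KE formulations in the literature, not necessarily the paper's own framework): given a pretrained model f_theta and an edit request (x_e, y_e), KE seeks updated parameters theta' satisfying, in LaTeX notation,

\[
f_{\theta'}(x_e) = y_e \quad \text{(reliability)}, \qquad
f_{\theta'}(x) = f_{\theta}(x) \;\; \forall x \notin \mathcal{N}(x_e) \quad \text{(locality)},
\]

where \mathcal{N}(x_e) denotes the neighborhood of inputs equivalent to the edit. Formulations often add a generality condition, f_{\theta'}(x) = y_e for all x \in \mathcal{N}(x_e), so that the edit also transfers to paraphrases of the edited fact.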
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2024.3498935