A Comprehensive Study of Knowledge Editing for Large Language Models
Saved in:
Main authors: | Zhang, Ningyu; Yao, Yunzhi; Tian, Bozhong; Wang, Peng; Deng, Shumin; Wang, Mengru; Xi, Zekun; Mao, Shengyu; Zhang, Jintian; Ni, Yuansheng; Cheng, Siyuan; Xu, Ziwen; Xu, Xin; Gu, Jia-Chen; Jiang, Yong; Xie, Pengjun; Huang, Fei; Liang, Lei; Zhang, Zhiqiang; Zhu, Xiaowei; Zhou, Jun; Chen, Huajun |
---|---|
Format: | Article |
Language: | eng |
creator | Zhang, Ningyu; Yao, Yunzhi; Tian, Bozhong; Wang, Peng; Deng, Shumin; Wang, Mengru; Xi, Zekun; Mao, Shengyu; Zhang, Jintian; Ni, Yuansheng; Cheng, Siyuan; Xu, Ziwen; Xu, Xin; Gu, Jia-Chen; Jiang, Yong; Xie, Pengjun; Huang, Fei; Liang, Lei; Zhang, Zhiqiang; Zhu, Xiaowei; Zhou, Jun; Chen, Huajun |
description | Large Language Models (LLMs) have shown extraordinary capabilities in
understanding and generating text that closely mirrors human communication.
However, a primary limitation lies in the significant computational demands
during training, arising from their extensive parameterization. This challenge
is further intensified by the dynamic nature of the world, necessitating
frequent updates to LLMs to correct outdated information or integrate new
knowledge, thereby ensuring their continued relevance. Moreover, many
applications demand continual post-training adjustments to correct
deficiencies or undesirable behaviors, which has spurred growing interest in
efficient, lightweight methods for on-the-fly model modification. To this end,
recent years have seen a surge of knowledge editing techniques for LLMs,
which aim to efficiently modify an LLM's behavior within specific domains
while preserving overall performance across various inputs. In this paper, we
first define the knowledge editing problem and then provide a comprehensive
review of cutting-edge approaches. Drawing inspiration from educational and
cognitive research theories, we propose a unified categorization criterion that
classifies knowledge editing methods into three groups: resorting to external
knowledge, merging knowledge into the model, and editing intrinsic knowledge.
Furthermore, we introduce a new benchmark, KnowEdit, for a comprehensive
empirical evaluation of representative knowledge editing approaches.
Additionally, we provide an in-depth analysis of knowledge location, which can
give a deeper understanding of the knowledge structures inherent within LLMs.
Finally, we discuss several potential applications of knowledge editing,
outlining its broad and impactful implications. |
doi_str_mv | 10.48550/arxiv.2401.01286 |
format | Article |
identifier | DOI: 10.48550/arxiv.2401.01286 |
language | eng |
recordid | cdi_arxiv_primary_2401_01286 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Human-Computer Interaction; Computer Science - Learning |
title | A Comprehensive Study of Knowledge Editing for Large Language Models |
url | https://arxiv.org/abs/2401.01286 |
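
The abstract notes that the paper begins by defining the knowledge editing problem. As a rough sketch of the formulation standard in this literature (the notation below is illustrative, not quoted from the paper): an edit request should change the model's answer on the edited input and its paraphrases while leaving unrelated behavior intact.

```latex
% Illustrative formulation (notation ours, not necessarily the paper's):
% given a model f_theta and an edit request (x_e, y_e), find f_{theta'} with
\begin{align*}
  f_{\theta'}(x_e) &= y_e
    && \text{(reliability: the edit takes effect)}\\
  f_{\theta'}(x)   &= y_e \quad \forall x \in N(x_e)
    && \text{(generality: paraphrases follow suit)}\\
  f_{\theta'}(x)   &= f_{\theta}(x) \quad \forall x \notin N(x_e)
    && \text{(locality: out-of-scope inputs are unchanged)}
\end{align*}
```

Here $N(x_e)$ denotes the in-scope neighborhood of the edit, e.g. rephrasings of the edited fact.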
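Of the abstract's three categories, "resorting to external knowledge" is the simplest to illustrate: edited facts live in an external memory and are prepended to the prompt at inference time, while the model's weights stay frozen. The sketch below is a minimal toy in that spirit; all names (`EditMemory`, `edited_prompt`, the example fact) are hypothetical, not from the paper or any specific library.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class EditMemory:
    """Stores (subject, relation) -> new-object triples added by edits."""
    facts: Dict[Tuple[str, str], str] = field(default_factory=dict)

    def add(self, subject: str, relation: str, new_object: str) -> None:
        self.facts[(subject, relation)] = new_object

    def retrieve(self, prompt: str) -> str:
        """Collect every stored fact whose subject is mentioned in the prompt."""
        hits = [f"Updated fact: {s} {r} {o}."
                for (s, r), o in self.facts.items()
                if s.lower() in prompt.lower()]
        return "\n".join(hits)

def edited_prompt(memory: EditMemory, prompt: str) -> str:
    """Prepend retrieved edits so a frozen LLM conditions on them."""
    context = memory.retrieve(prompt)
    return f"{context}\n{prompt}" if context else prompt

if __name__ == "__main__":
    mem = EditMemory()
    mem.add("Lionel Messi", "plays for", "Inter Miami")
    print(edited_prompt(mem, "Which club does Lionel Messi play for?"))
```

The appeal of this family is that the base model is untouched, so locality comes for free; the cost is that every query must pass through the retrieval step.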
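"Editing intrinsic knowledge", by contrast, modifies weights directly. A common building block in locate-then-edit methods (e.g. ROME-style approaches) is a rank-one update that rewrites what a linear layer returns for one key vector. The toy below shows only that algebraic step on a plain matrix, under our own simplifying assumptions; it is not the paper's algorithm.

```python
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return W' with W' @ k == v_new via a minimal rank-one correction."""
    v_old = W @ k
    # W' = W + (v_new - v_old) k^T / (k . k): only the response to k changes;
    # for keys orthogonal to k, W' behaves exactly like W.
    return W + torch.outer(v_new - v_old, k) / (k @ k)

W = torch.randn(4, 3)    # stand-in for an MLP projection matrix
k = torch.randn(3)       # key vector encoding the fact being edited
v_new = torch.randn(4)   # value vector encoding the new answer
W_edited = rank_one_edit(W, k, v_new)
print(torch.allclose(W_edited @ k, v_new, atol=1e-5))  # expected: True
```

Real methods of this family add machinery the toy omits: locating which layer stores the fact, deriving k and v_new from text, and constraining the update so that thousands of unrelated keys are preserved.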