Symbolic Learning Enables Self-Evolving Agents
Main authors: | Zhou, Wangchunshu; Ou, Yixin; Ding, Shengwei; Li, Long; Wu, Jialong; Wang, Tiannan; Chen, Jiamin; Wang, Shuai; Xu, Xiaohua; Zhang, Ningyu; Chen, Huajun; Jiang, Yuchen Eleanor |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Full text via arXiv (open access) |
creator | Zhou, Wangchunshu; Ou, Yixin; Ding, Shengwei; Li, Long; Wu, Jialong; Wang, Tiannan; Chen, Jiamin; Wang, Shuai; Xu, Xiaohua; Zhang, Ningyu; Chen, Huajun; Jiang, Yuchen Eleanor |
description | The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents": complex large language model (LLM) pipelines that combine prompting techniques and tool-usage methods. While language agents have demonstrated impressive capabilities on many real-world tasks, a fundamental limitation of current language agent research is that it is model-centric, or engineering-centric. That is to say, progress on the prompts, tools, and pipelines of language agents requires substantial manual engineering effort from human experts rather than automatic learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in their environments, is the key for them to possibly achieve AGI.
In this work, we introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using symbolic optimizers. Specifically, we consider agents as symbolic networks whose learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning optimizes the symbolic network within a language agent by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with natural-language simulacra of weights, losses, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show that agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in "self-evolving agents". |
doi_str_mv | 10.48550/arxiv.2406.18532 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2406.18532 |
language | eng |
recordid | cdi_arxiv_primary_2406_18532 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
title | Symbolic Learning Enables Self-Evolving Agents |
url | https://arxiv.org/abs/2406.18532 |