GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion


Bibliographic Details
Main Authors: Liu, Tongxuan; Wang, Xingyu; Huang, Weizhe; Xu, Wenjiang; Zeng, Yuting; Jiang, Lei; Yang, Hailong; Li, Jing
Format: Article
Language: eng
Subjects:
Online Access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Liu, Tongxuan
Wang, Xingyu
Huang, Weizhe
Xu, Wenjiang
Zeng, Yuting
Jiang, Lei
Yang, Hailong
Li, Jing
description In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse NLP tasks. Extensive research has explored how to enhance their logical reasoning abilities through techniques such as Chain-of-Thought, Chain-of-Thought with Self-Consistency, Tree-of-Thoughts, and multi-agent debate. In the context of multi-agent debate, significant performance improvements can be achieved by increasing the number of agents and debate rounds. However, this escalation drastically raises the token cost of debates, limiting the scalability of the multi-agent debate technique. To better harness the advantages of multi-agent debate in logical reasoning tasks, this paper proposes a method to significantly reduce its token cost. The approach divides all agents into multiple debate groups; agents debate within their respective groups and share interim debate results between groups. Comparative experiments across multiple datasets demonstrate that this method can reduce total tokens by up to 51.7% during debates while potentially enhancing accuracy by as much as 25%. Our method significantly enhances the performance and efficiency of interactions in multi-agent debate.
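The grouped-debate scheme the abstract describes can be illustrated with a minimal sketch. The `group_debate` function, the round-robin grouping, and the majority-vote aggregation below are illustrative assumptions, not the paper's exact protocol; the `agents` callables stand in for LLM-backed debaters.

```python
def group_debate(agents, question, num_groups=2, rounds=3):
    """Sketch of the grouped debate loop described in the abstract.

    `agents` is a list of callables: agent(question, context) -> answer.
    Agents debate within their group; between rounds, groups exchange only
    a compact interim result, so each agent reads far fewer peer answers
    than in a fully connected debate -- the source of the token savings.
    """
    # Split agent indices round-robin into debate groups.
    groups = [list(range(len(agents)))[g::num_groups] for g in range(num_groups)]
    interim = [""] * num_groups        # each group's shared interim result
    answers = [""] * len(agents)       # latest answer from each agent

    for _ in range(rounds):
        new_interim = []
        for g, members in enumerate(groups):
            # Context: other groups' interim results, then this group's answers.
            others = [s for j, s in enumerate(interim) if j != g and s]
            group_answers = []
            for idx in members:
                context = "\n".join(others + group_answers)
                answers[idx] = agents[idx](question, context)
                group_answers.append(answers[idx])
            # Share only the group's last answer as its interim summary.
            new_interim.append(group_answers[-1] if group_answers else "")
        interim = new_interim

    # Aggregate by majority vote over all agents' final answers.
    return max(set(answers), key=answers.count)


# Toy deterministic agents; a real system would call an LLM here.
agents = [lambda q, c: "4"] * 3 + [lambda q, c: "5"]
print(group_debate(agents, "What is 2 + 2?"))  # majority answer: "4"
```

The token saving comes from the context each agent reads: within a group an agent sees only its group-mates' answers plus one interim summary per other group, rather than every answer from every agent as in a fully connected debate.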
doi_str_mv 10.48550/arxiv.2409.14051
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2409.14051
ispartof
issn
language eng
recordid cdi_arxiv_primary_2409_14051
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
title GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T04%3A16%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=GroupDebate:%20Enhancing%20the%20Efficiency%20of%20Multi-Agent%20Debate%20Using%20Group%20Discussion&rft.au=Liu,%20Tongxuan&rft.date=2024-09-21&rft_id=info:doi/10.48550/arxiv.2409.14051&rft_dat=%3Carxiv_GOX%3E2409_14051%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true