GameArena: Evaluating LLM Reasoning through Live Computer Games
Evaluating the reasoning abilities of large language models (LLMs) is challenging. Existing benchmarks often depend on static datasets, which are vulnerable to data contamination and may get saturated over time, or on binary live human feedback that conflates reasoning with other abilities. As the most prominent dynamic benchmark, Chatbot Arena evaluates open-ended questions in real-world settings, but lacks the granularity in assessing specific reasoning capabilities. We introduce GameArena, a dynamic benchmark designed to evaluate LLM reasoning capabilities through interactive gameplay with humans. GameArena consists of three games designed to test specific reasoning capabilities (e.g., deductive and inductive reasoning), while keeping participants entertained and engaged. We analyze the gaming data retrospectively to uncover the underlying reasoning processes of LLMs and measure their fine-grained reasoning capabilities. We collect over 2000 game sessions and provide detailed assessments of various reasoning capabilities for five state-of-the-art LLMs. Our user study with 100 participants suggests that GameArena improves user engagement compared to Chatbot Arena. For the first time, GameArena enables the collection of step-by-step LLM reasoning data in the wild.
Published in: | arXiv.org, 2024-12 |
---|---|
Main authors: | Hu, Lanxiang; Li, Qiyu; Xie, Anze; Jiang, Nan; Stoica, Ion; Jin, Haojian; Zhang, Hao |
Format: | Article |
Language: | English |
Identifier: | EISSN: 2331-8422 |
Subjects: | Benchmarks; Chatbots; Computer & video games; Evaluation; Human performance; Large language models; Reasoning |
Publisher: | Cornell University Library, arXiv.org (Ithaca) |
Source: | Free E-Journals |
Online access: | Full text |