VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks
Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to effectively solve.
Saved in:
Published in: | arXiv.org 2024-01 |
---|---|
Main authors: | Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, Daniel Fried |
Format: | Article |
Language: | eng |
Subjects: | Benchmarks; Natural language processing; Qualitative analysis; State-of-the-art reviews; Task complexity; Visual tasks |
Online access: | Full text |
creator | Jing Yu Koh; Lo, Robert; Jang, Lawrence; Duvvur, Vikram; Ming Chong Lim; Po-Yu, Huang; Neubig, Graham; Zhou, Shuyan; Salakhutdinov, Ruslan; Fried, Daniel |
description | Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to effectively solve. Given that most computer interfaces cater to human perception, visual information often augments textual data in ways that text-only models struggle to harness effectively. To bridge this gap, we introduce VisualWebArena, a benchmark designed to assess the performance of multimodal web agents on realistic visually grounded tasks. VisualWebArena comprises a set of diverse and complex web-based tasks that evaluate various capabilities of autonomous multimodal agents. To perform well on this benchmark, agents need to accurately process image-text inputs, interpret natural language instructions, and execute actions on websites to accomplish user-defined objectives. We conduct an extensive evaluation of state-of-the-art LLM-based autonomous agents, including several multimodal models. Through extensive quantitative and qualitative analysis, we identify several limitations of text-only LLM agents and reveal gaps in the capabilities of state-of-the-art multimodal language agents. VisualWebArena provides a framework for evaluating multimodal autonomous language agents, and offers insights towards building stronger autonomous agents for the web. Our code, baseline models, and data are publicly available at https://jykoh.com/vwa. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2918404996 |
source | Free E-Journals |
subjects | Benchmarks; Natural language processing; Qualitative analysis; State-of-the-art reviews; Task complexity; Visual tasks |
title | VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T14%3A43%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=VisualWebArena:%20Evaluating%20Multimodal%20Agents%20on%20Realistic%20Visual%20Web%20Tasks&rft.jtitle=arXiv.org&rft.au=Jing%20Yu%20Koh&rft.date=2024-01-24&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2918404996%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2918404996&rft_id=info:pmid/&rfr_iscdi=true |
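The abstract describes agents that, at each step, receive a multimodal observation (a page screenshot plus text such as an accessibility tree), interpret a natural-language objective, and execute an action on the website. Below is a minimal, hypothetical sketch of such an observe-decide-act loop; the names used (Observation, WebEnv, propose_action, run_episode) are illustrative assumptions and are not the VisualWebArena API.

```python
# Hypothetical sketch of a multimodal web-agent loop, as described in the abstract.
# All class and function names here are illustrative, not the VisualWebArena codebase.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Observation:
    screenshot_png: bytes   # rendered page image
    page_text: str          # e.g. accessibility tree or DOM text
    url: str


@dataclass
class Action:
    kind: str               # "click", "type", "scroll", "stop", ...
    target: str = ""        # element id or text locator
    text: str = ""          # text to type, if any


class WebEnv(Protocol):
    """Abstract web environment: reset to a task's start page, then step with actions."""
    def reset(self, task: str) -> Observation: ...
    def step(self, action: Action) -> Observation: ...


def propose_action(objective: str, obs: Observation, history: list[Action]) -> Action:
    """Placeholder for a multimodal LLM call that maps (objective, image, text, history)
    to the next action; a real agent would prompt a vision-language model here."""
    raise NotImplementedError


def run_episode(env: WebEnv, objective: str, max_steps: int = 30) -> list[Action]:
    """Roll out one task: observe, decide, act, until the agent issues 'stop'."""
    history: list[Action] = []
    obs = env.reset(objective)
    for _ in range(max_steps):
        action = propose_action(objective, obs, history)
        history.append(action)
        if action.kind == "stop":
            break
        obs = env.step(action)
    return history
```

A benchmark harness would then score the resulting page state or the agent's final answer against the task's success criteria.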