SteP: Stacked LLM Policies for Web Actions

Performing tasks on the web presents fundamental challenges to large language models (LLMs), including combinatorially large open-world tasks and variations across web interfaces. Simply specifying a large prompt to handle all possible behaviors and states is extremely complex, and results in behavior leaks between unrelated behaviors. Decomposition into distinct policies can address this challenge, but requires carefully handing off control between policies. We propose Stacked LLM Policies for Web Actions (SteP), an approach to dynamically compose policies to solve a diverse set of web tasks. SteP defines a Markov Decision Process where the state is a stack of policies representing the control state, i.e., the chain of policy calls. Unlike traditional methods that are restricted to static hierarchies, SteP enables dynamic control that adapts to the complexity of the task. We evaluate SteP against multiple baselines and web environments including WebArena, MiniWoB++, and a CRM. On WebArena, SteP improves (14.9% to 33.5%) over SOTA methods that use GPT-4 policies, while on MiniWoB++, SteP is competitive with prior works while using significantly less data. Our code and data are available at https://asappresearch.github.io/webagents-step.
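The policy-stack mechanism summarized in the abstract can be sketched as a toy control loop. All function names, observation fields, and actions below are illustrative assumptions for exposition, not the authors' implementation: each policy inspects an observation and either acts, pushes a sub-policy onto the stack (handing off control), or pops itself (returning control to its caller).

```python
def login_policy(obs):
    # Sub-policy: type credentials, then return control to the caller.
    if not obs["logged_in"]:
        return ("act", "type_credentials")
    return ("pop", None)

def root_policy(obs):
    # Root task policy: delegate login first, then act directly.
    if not obs["logged_in"]:
        return ("push", login_policy)
    if not obs["done"]:
        return ("act", "click_submit")
    return ("pop", None)

def run(env_state):
    stack = [root_policy]          # control state = a stack of policies
    trace = []
    while stack:
        policy = stack[-1]         # the policy on top of the stack decides
        kind, arg = policy(env_state)
        if kind == "push":
            stack.append(arg)      # hand off control to a sub-policy
        elif kind == "pop":
            stack.pop()            # return control to the calling policy
        else:                      # "act": execute a web action
            trace.append(arg)
            if arg == "type_credentials":
                env_state["logged_in"] = True
            elif arg == "click_submit":
                env_state["done"] = True
    return trace

# run({"logged_in": False, "done": False})
# -> ["type_credentials", "click_submit"]
```

Because the stack grows and shrinks at run time, the chain of policy calls adapts to the task rather than following a fixed hierarchy, which is the dynamic-control property the abstract contrasts with static decompositions.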

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Sodhi, Paloma; Branavan, S. R. K.; Artzi, Yoav; McDonald, Ryan
Format: Article
Language: English
Subjects: Computer Science - Learning
Online Access: Order full text
DOI: 10.48550/arxiv.2310.03720
Published: 2023-10-05
Rights: http://creativecommons.org/licenses/by/4.0
Source: arXiv.org