Learning Mixed Strategies in Trajectory Games

In multi-agent settings, game theory is a natural framework for describing the strategic interactions of agents whose objectives depend upon one another's behavior. Trajectory games capture these complex effects by design. In competitive settings, this makes them a more faithful interaction model than traditional "predict then plan" approaches. However, current game-theoretic planning methods have important limitations. In this work, we propose two main contributions. First, we introduce an offline training phase which reduces the online computational burden of solving trajectory games. Second, we formulate a lifted game which allows players to optimize multiple candidate trajectories in unison and thereby construct more competitive "mixed" strategies. We validate our approach on a number of experiments using the pursuit-evasion game "tag."
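The lifted-game idea described above — each player maintains several candidate trajectories and randomizes ("mixes") over them rather than committing to one — can be illustrated in a much-simplified form on a finite zero-sum matrix game. The sketch below is not the paper's solver; the function and the toy payoff matrix are invented for illustration. It uses fictitious play to recover an approximate mixed-strategy equilibrium over a small set of candidates:

```python
import numpy as np

def fictitious_play(A, iters=5000):
    """Approximate a mixed-strategy equilibrium of a zero-sum matrix game.

    A[i, j] is the payoff to the row player (maximizer) when the row
    player picks candidate i and the column player picks candidate j.
    Each round, both players best-respond to the opponent's empirical
    mixture of past choices; in zero-sum games the empirical frequencies
    converge to equilibrium mixed strategies.
    """
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0  # arbitrary initial pure strategies
    for _ in range(iters):
        # row player maximizes against the column player's mixture
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        # column player minimizes against the row player's mixture
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Toy "tag"-like payoff with two candidate trajectories per player:
# the pursuer scores exactly when it matches the evader's choice, so
# both sides should randomize uniformly (game value 0).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
x, y = fictitious_play(A)
```

In a matching-pennies payoff like this, both empirical mixtures approach the uniform distribution (0.5, 0.5); any deterministic pure strategy would be exploitable by the opponent, which is precisely the motivation the abstract gives for mixed strategies in competitive settings.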

Detailed Description

Bibliographic Details
Published in: arXiv.org 2022-05
Main Authors: Peters, Lasse; Fridovich-Keil, David; Ferranti, Laura; Stachniss, Cyrill; Alonso-Mora, Javier; Laine, Forrest
Format: Article
Language: English (eng)
Subjects: Business competition; Game theory; Games; Interaction models; Multiagent systems; Pursuit-evasion games
Online Access: Full text
container_title arXiv.org
creator Peters, Lasse
Fridovich-Keil, David
Ferranti, Laura
Stachniss, Cyrill
Alonso-Mora, Javier
Laine, Forrest
description In multi-agent settings, game theory is a natural framework for describing the strategic interactions of agents whose objectives depend upon one another's behavior. Trajectory games capture these complex effects by design. In competitive settings, this makes them a more faithful interaction model than traditional "predict then plan" approaches. However, current game-theoretic planning methods have important limitations. In this work, we propose two main contributions. First, we introduce an offline training phase which reduces the online computational burden of solving trajectory games. Second, we formulate a lifted game which allows players to optimize multiple candidate trajectories in unison and thereby construct more competitive "mixed" strategies. We validate our approach on a number of experiments using the pursuit-evasion game "tag."
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-05
issn 2331-8422
language eng
recordid cdi_proquest_journals_2659395428
source Free E-Journals
subjects Business competition
Game theory
Games
Interaction models
Multiagent systems
Pursuit-evasion games
title Learning Mixed Strategies in Trajectory Games
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T21%3A17%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Learning%20Mixed%20Strategies%20in%20Trajectory%20Games&rft.jtitle=arXiv.org&rft.au=Peters,%20Lasse&rft.date=2022-05-03&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2659395428%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2659395428&rft_id=info:pmid/&rfr_iscdi=true