Explaining Decisions of Agents in Mixed-Motive Games
In recent years, agents have become capable of communicating seamlessly via natural language and navigating in environments that involve cooperation and competition, a fact that can introduce social dilemmas. Due to the interleaving of cooperation and competition, understanding agents' decision-making in such environments is challenging, and humans can benefit from obtaining explanations. However, such environments and scenarios have rarely been explored in the context of explainable AI. While some explanation methods for cooperative environments can be applied in mixed-motive setups, they do not address inter-agent competition, cheap-talk, or implicit communication by actions. In this work, we design explanation methods to address these issues. Then, we proceed to demonstrate their effectiveness and usefulness for humans, using a non-trivial mixed-motive game as a test case. Lastly, we establish generality and demonstrate the applicability of the methods to other games, including one where we mimic human game actions using large language models.
Saved in:
Published in: | arXiv.org 2024-07 |
---|---|
Main authors: | Orner, Maayan; Maksimov, Oleg; Kleinerman, Akiva; Ortiz, Charles; Kraus, Sarit |
Format: | Article |
Language: | eng |
Subjects: | Communication; Competition; Cooperation; Explainable artificial intelligence; Game theory; Games; Large language models |
Online access: | Full text |
container_title | arXiv.org |
creator | Orner, Maayan; Maksimov, Oleg; Kleinerman, Akiva; Ortiz, Charles; Kraus, Sarit |
description | In recent years, agents have become capable of communicating seamlessly via natural language and navigating in environments that involve cooperation and competition, a fact that can introduce social dilemmas. Due to the interleaving of cooperation and competition, understanding agents' decision-making in such environments is challenging, and humans can benefit from obtaining explanations. However, such environments and scenarios have rarely been explored in the context of explainable AI. While some explanation methods for cooperative environments can be applied in mixed-motive setups, they do not address inter-agent competition, cheap-talk, or implicit communication by actions. In this work, we design explanation methods to address these issues. Then, we proceed to demonstrate their effectiveness and usefulness for humans, using a non-trivial mixed-motive game as a test case. Lastly, we establish generality and demonstrate the applicability of the methods to other games, including one where we mimic human game actions using large language models. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3083762811 |
source | Free E-Journals |
subjects | Communication; Competition; Cooperation; Explainable artificial intelligence; Game theory; Games; Large language models |
title | Explaining Decisions of Agents in Mixed-Motive Games |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T08%3A39%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Explaining%20Decisions%20of%20Agents%20in%20Mixed-Motive%20Games&rft.jtitle=arXiv.org&rft.au=Orner,%20Maayan&rft.date=2024-07-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3083762811%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3083762811&rft_id=info:pmid/&rfr_iscdi=true |