Sample-efficient multi-agent reinforcement learning with masked reconstruction
Deep reinforcement learning (DRL) is a powerful approach that combines reinforcement learning (RL) and deep learning to address complex decision-making problems in high-dimensional environments. Although DRL has been remarkably successful, its low sample efficiency necessitates extensive training times and large amounts of data to learn optimal policies. These limitations are more pronounced in the context of multi-agent reinforcement learning (MARL). To address these limitations, various studies have been conducted to improve DRL. In this study, we propose an approach that combines a masked reconstruction task with QMIX (M-QMIX). By introducing a masked reconstruction task as an auxiliary task, we aim to achieve enhanced sample efficiency, a fundamental limitation of RL in multi-agent systems. Experiments were conducted using the StarCraft II micromanagement benchmark to validate the effectiveness of the proposed method. We used 11 scenarios comprising five easy, three hard, and three very hard scenarios. We particularly focused on using a limited number of time steps for each scenario to demonstrate the improved sample efficiency. Compared to QMIX, the proposed method is superior in eight of the 11 scenarios. These results provide strong evidence that the proposed method is more sample-efficient than QMIX, demonstrating that it effectively addresses the limitations of DRL in multi-agent systems.
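The abstract only sketches the core mechanism, so a hedged illustration may help: the PyTorch snippet below shows one plausible way a masked-reconstruction auxiliary task can sit alongside a per-agent Q-network in a QMIX-style setup. The class name, layer sizes, `mask_ratio`, and loss weighting are illustrative assumptions, not the authors' M-QMIX implementation.

```python
# Minimal sketch of the idea in the abstract: a per-agent network whose encoder
# is trained both to produce Q-values (fed into a QMIX mixing network, omitted
# here) and to reconstruct randomly masked observation features.
# All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class AgentWithMaskedReconstruction(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int,
                 hidden: int = 64, mask_ratio: float = 0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)   # per-agent utilities for mixing
        self.decoder = nn.Linear(hidden, obs_dim)    # reconstructs the full observation

    def forward(self, obs: torch.Tensor):
        # Zero out roughly `mask_ratio` of the observation features at random.
        mask = (torch.rand_like(obs) > self.mask_ratio).float()
        z = self.encoder(obs * mask)
        q_values = self.q_head(z)
        recon = self.decoder(z)
        # Auxiliary objective: recover the original observation from the masked one.
        recon_loss = ((recon - obs) ** 2).mean()
        return q_values, recon_loss


# Usage sketch: in training, the reconstruction term would be added to the
# standard QMIX TD loss, e.g. total_loss = td_loss + lambda_recon * recon_loss,
# where lambda_recon is a tunable weight (an assumption of this sketch).
agent = AgentWithMaskedReconstruction(obs_dim=80, n_actions=10)
obs = torch.randn(32, 80)            # a batch of agent observations
q_values, recon_loss = agent(obs)
```

Forcing the encoder to fill in masked features is what the abstract means by using masked reconstruction as an auxiliary task: the extra self-supervised signal shapes the representation without requiring additional environment interactions, which is where the sample-efficiency gain is claimed to come from.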
Saved in:

| Field | Value |
|---|---|
| Published in | PloS one, 2023-09-14, Vol. 18 (9), p. e0291545 |
| Main authors | Kim, Jung In; Lee, Young Jae; Heo, Jongkook; Park, Jinhyeok; Kim, Jaehoon; Lim, Sae Rin; Jeong, Jinyong; Kim, Seoung Bum |
| Format | Article |
| Language | English |
| Subjects | Benchmarking; Biology and Life Sciences; Computer and Information Sciences; Data mining; Decision making; Deep learning; Efficiency; Evaluation; Learning; Methods; Multiagent systems; Physical Sciences; Policy; Reconstruction; Reinforcement; Reinforcement learning (Machine learning); Reinforcement, Psychology; Research and Analysis Methods; Social Sciences; System effectiveness |
| DOI | 10.1371/journal.pone.0291545 |
| ISSN / EISSN | 1932-6203 |
| PMID | 37708154 |
| Publisher | Public Library of Science |
| Online access | Full text |