Federated TD Learning over Finite-Rate Erasure Channels: Linear Speedup under Markovian Sampling

Federated learning (FL) has recently gained much attention due to its effectiveness in speeding up supervised learning tasks under communication and privacy constraints. However, whether similar speedups can be established for reinforcement learning remains much less understood theoretically. Toward...

Detailed description

Saved in:
Bibliographic details
Main authors: Fabbro, Nicolò Dal, Mitra, Aritra, Pappas, George J
Format: Article
Language: eng
Subject headings:
Online access: Order full text
creator Fabbro, Nicolò Dal
Mitra, Aritra
Pappas, George J
description Federated learning (FL) has recently gained much attention due to its effectiveness in speeding up supervised learning tasks under communication and privacy constraints. However, whether similar speedups can be established for reinforcement learning remains much less understood theoretically. Towards this direction, we study a federated policy evaluation problem where agents communicate via a central aggregator to expedite the evaluation of a common policy. To capture typical communication constraints in FL, we consider finite capacity up-link channels that can drop packets based on a Bernoulli erasure model. Given this setting, we propose and analyze QFedTD - a quantized federated temporal difference learning algorithm with linear function approximation. Our main technical contribution is to provide a finite-sample analysis of QFedTD that (i) highlights the effect of quantization and erasures on the convergence rate; and (ii) establishes a linear speedup w.r.t. the number of agents under Markovian sampling. Notably, while different quantization mechanisms and packet drop models have been extensively studied in the federated learning, distributed optimization, and networked control systems literature, our work is the first to provide a non-asymptotic analysis of their effects in multi-agent and federated reinforcement learning.
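The abstract describes QFedTD: each agent runs TD(0) with linear function approximation on its own Markovian trajectory, quantizes its local TD update, and sends it over a Bernoulli erasure up-link to a server that averages whatever arrives. The sketch below is a minimal illustration of that loop, not the paper's implementation; the MDP, feature matrix, quantizer, and all parameter values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small MDP for policy evaluation: the fixed policy induces
# transition kernel P and a reward vector r; Phi holds linear features.
n_states, d, n_agents = 10, 4, 8
P = rng.dirichlet(np.ones(n_states), size=n_states)  # rows sum to 1
r = rng.uniform(size=n_states)
Phi = rng.standard_normal((n_states, d))
gamma, alpha, p_erase, T = 0.9, 0.05, 0.2, 500

def quantize(g, n_bits=4):
    """Crude uniform quantization of the TD direction (illustrative only)."""
    scale = np.max(np.abs(g)) + 1e-12
    levels = 2 ** n_bits
    return np.round(g / scale * levels) / levels * scale

theta = np.zeros(d)
states = rng.integers(n_states, size=n_agents)  # each agent's own Markov chain

for t in range(T):
    aggregate, received = np.zeros(d), 0
    for i in range(n_agents):
        s = states[i]
        s_next = rng.choice(n_states, p=P[s])
        td_err = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
        g = td_err * Phi[s]            # local TD(0) update direction
        states[i] = s_next
        if rng.random() > p_erase:     # Bernoulli erasure on the up-link
            aggregate += quantize(g)
            received += 1
    if received:                       # server averages the received updates
        theta += alpha * aggregate / received
```

Averaging over agents is what drives the linear speedup the paper analyzes: the averaged direction has lower variance than any single agent's noisy TD update, while quantization and erasures enter the convergence bound as additional error terms.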
doi_str_mv 10.48550/arxiv.2305.08104
format Article
creationdate 2023-05-14
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2305.08104
language eng
recordid cdi_arxiv_primary_2305_08104
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Computer Science - Multiagent Systems
Computer Science - Systems and Control
Mathematics - Optimization and Control
title Federated TD Learning over Finite-Rate Erasure Channels: Linear Speedup under Markovian Sampling