Fast and reliable entanglement distribution with quantum repeaters: principles for improving protocols using reinforcement learning
Saved in:
Published in: | arXiv.org 2024-04 |
---|---|
Main authors: | Haldar, Stav; Barge, Pratik J; Khatri, Sumeet; Hwang, Lee |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Haldar, Stav; Barge, Pratik J; Khatri, Sumeet; Hwang, Lee
description | Future quantum technologies such as quantum communication, quantum sensing, and distributed quantum computation will rely on networks of shared entanglement between spatially separated nodes. In this work, we provide improved protocols/policies for entanglement distribution along a linear chain of nodes, both homogeneous and inhomogeneous, that take into account practical limitations such as photon losses, non-ideal measurements, and quantum memories with short coherence times. For a wide range of parameters, our policies improve upon previously known policies, such as the "swap-as-soon-as-possible" policy, with respect to both the waiting time and the fidelity of the end-to-end entanglement. This improvement is greatest for the most practically relevant cases, namely short coherence times, high link losses, and highly asymmetric links. To obtain our results, we model entanglement distribution as a Markov decision process and then use the Q-learning reinforcement learning (RL) algorithm to discover new policies. These new policies are characterized by dynamic, state-dependent memory cutoffs and collaboration between the nodes. In particular, we quantify this collaboration: our quantifiers tell us how much "global" knowledge of the network every node has. Finally, our understanding of the performance of large quantum networks is currently limited by the computational inefficiency of simulating them using RL or other optimization methods. We therefore present a method for nesting policies in order to obtain policies for large repeater chains. By nesting our RL-based policies for small repeater chains, we obtain policies for large repeater chains that improve upon the swap-as-soon-as-possible policy, paving the way toward a scalable method for long-distance entanglement distribution. (See the illustrative sketch after the record fields below.)
doi_str_mv | 10.48550/arxiv.2303.00777 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2303_00777 |
source | arXiv.org; Free E-Journals
subjects | Accuracy; Algorithms; Chains; Coherence; Collaboration; Cooperation; Machine learning; Markov processes; Nesting; Nodes; Optimization; Physics - Quantum Physics; Policies; Quantum computing; Quantum entanglement
title | Fast and reliable entanglement distribution with quantum repeaters: principles for improving protocols using reinforcement learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T03%3A36%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Fast%20and%20reliable%20entanglement%20distribution%20with%20quantum%20repeaters:%20principles%20for%20improving%20protocols%20using%20reinforcement%20learning&rft.jtitle=arXiv.org&rft.au=Haldar,%20Stav&rft.date=2024-04-01&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2303.00777&rft_dat=%3Cproquest_arxiv%3E2782030767%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2782030767&rft_id=info:pmid/&rfr_iscdi=true |
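The description above outlines the method only at a high level: entanglement distribution is modeled as a Markov decision process (MDP), and policies are found with Q-learning. As a rough, self-contained illustration, the sketch below runs tabular Q-learning on a hypothetical two-link repeater chain (one middle node). The state encoding (per-link memory ages), the success probabilities, the hard cutoff, and the reward shaping are all toy assumptions made here for illustration, not the paper's actual model.

```python
import random
from collections import defaultdict

# Toy two-link repeater chain A - R - B. All parameters below are
# illustrative assumptions, not values from the paper.
P_GEN, P_SWAP = 0.3, 0.8   # per-step link-generation / swap success probabilities
T_CUT = 5                  # hard memory cutoff: older pairs are assumed decohered
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
EPISODES, MAX_STEPS = 5000, 200

ACTIONS = ["wait", "swap", "discard"]

def step(state, action):
    """One step of the toy MDP. state = (m1, m2); mi = -1 means no link,
    otherwise mi is the age of the stored entangled pair on link i."""
    m1, m2 = state
    if action == "swap" and m1 >= 0 and m2 >= 0:
        if random.random() < P_SWAP:
            # Terminal success; reward decays with total memory age,
            # a crude stand-in for end-to-end fidelity.
            return None, 10.0 * 0.9 ** (m1 + m2)
        return (-1, -1), -1.0  # failed swap consumes both links
    if action == "discard":
        m1 = m2 = -1
    # "wait" (and invalid swaps) attempts generation on empty links
    # and ages the stored ones.
    def evolve(m):
        if m < 0:
            return 0 if random.random() < P_GEN else -1
        m += 1
        return -1 if m > T_CUT else m  # decohered past the cutoff
    return (evolve(m1), evolve(m2)), -0.1  # per-step waiting penalty

Q = defaultdict(float)

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(EPISODES):
    s = (-1, -1)
    for _ in range(MAX_STEPS):  # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        best_next = 0.0 if s2 is None else Q[(s2, greedy(s2))]
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        if s2 is None:
            break
        s = s2

# Inspect the learned policy: when should the middle node swap vs. hold?
for m1 in range(T_CUT + 1):
    print([greedy((m1, m2)) for m2 in range(T_CUT + 1)])
```

A state-dependent cutoff of the kind the abstract describes shows up here as the learned choice between "wait", "swap", and "discard" as a function of the two memory ages. The paper's nesting idea would, in this toy picture, correspond to reusing a policy learned for a short chain as a subroutine on segments of a longer chain; the sketch above stops at the single-swap case.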