Accelerating deep reinforcement learning strategies of flow control through a multi-environment approach
Deep Reinforcement Learning (DRL) has recently been proposed as a methodology to discover complex active flow control strategies [Rabault et al., “Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control,” J. Fluid Mech. 865, 281–302 (2019)]. However, while promising results were obtained on a simple 2-dimensional benchmark flow at a moderate Reynolds number, considerable speedups will be required to investigate more challenging flow configurations. In the case of DRL trained with Computational Fluid Dynamics (CFD) data, it was found that the CFD part, rather than training the artificial neural network, was the limiting factor for speed of execution. Therefore, speedups should be obtained through a combination of two approaches. The first one, which is well documented in the literature, is to parallelize the numerical simulation itself. The second one is to adapt the DRL algorithm for parallelization. Here, a simple strategy is to use several independent simulations running in parallel to collect experiences faster. In the present work, we discuss this solution for parallelization. We illustrate that perfect speedups can be obtained up to the batch size of the DRL agent, and slightly suboptimal scaling still takes place for an even larger number of simulations. This is, therefore, an important step toward enabling the study of more sophisticated fluid mechanics problems through DRL.
Saved in:
Published in: | Physics of Fluids (1994), 2019-09, Vol. 31 (9) |
---|---|
Authors: | Rabault, Jean; Kuhnle, Alexander |
Format: | Article |
Language: | English |
Subjects: | Active control; Algorithms; Artificial neural networks; Computational fluid dynamics; Computer simulation; Flow control; Fluid dynamics; Fluid flow; Fluid mechanics; Machine learning; Neural networks; Parallel processing; Physics; Reynolds number; Two-dimensional flow |
Online access: | Full text |
DOI: | 10.1063/1.5116415 |
ISSN: | 1070-6631 |
EISSN: | 1089-7666 |
Source: | NORA - Norwegian Open Research Archives; AIP Journals Complete; Alma/SFX Local Collection |
URL: | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-22T18%3A18%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_scita&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Accelerating%20deep%20reinforcement%20learning%20strategies%20of%20flow%20control%20through%20a%20multi-environment%20approach&rft.jtitle=Physics%20of%20fluids%20(1994)&rft.au=Rabault,%20Jean&rft.date=2019-09-01&rft.volume=31&rft.issue=9&rft.issn=1070-6631&rft.eissn=1089-7666&rft.coden=PHFLE6&rft_id=info:doi/10.1063/1.5116415&rft_dat=%3Cproquest_scita%3E2297511744%3C/proquest_scita%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2297511744&rft_id=info:pmid/&rfr_iscdi=true |