A Case Study of Deep Reinforcement Learning for Engineering Design: Application to Microfluidic Devices for Flow Sculpting

Efficient exploration of design spaces is highly sought after in engineering applications. A spectrum of tools has been proposed to deal with the computational difficulties associated with such problems. In the context of our case study, these tools can be broadly classified into optimization and supervised learning approaches. Optimization approaches, while successful, are inherently data inefficient, with evolutionary optimization-based methods being a good example. This inefficiency stems from data not being reused from previous design explorations. Alternatively, supervised learning-based design paradigms are data efficient. However, the quality of ensuing solutions depends heavily on the quality of data available. Furthermore, it is difficult to incorporate physics models and domain knowledge aspects of design exploration into pure-learning-based methods. In this work, we formulate a reinforcement learning (RL)-based design framework that mitigates disadvantages of both approaches. Our framework simultaneously finds solutions that are more efficient compared with supervised learning approaches while using data more efficiently compared with genetic algorithm (GA)-based optimization approaches. We illustrate our framework on a problem of microfluidic device design for flow sculpting, and our results show that a single generic RL agent is capable of exploring the solution space to achieve multiple design objectives. Additionally, we demonstrate that the RL agent can be used to solve more complex problems using a targeted refinement step. Thus, we address the data efficiency limitation of optimization-based methods and the limited data problem of supervised learning-based methods. The versatility of our framework is illustrated by utilizing it to gain domain insights and to incorporate domain knowledge. We envision such RL frameworks to have an impact on design science.
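The abstract describes a generic RL agent that builds a microfluidic device step by step (sequentially choosing pillars) and is rewarded by how closely the resulting sculpted flow matches a design objective. As a rough illustration of that episodic formulation — not the authors' code — the sketch below uses tabular Q-learning with a toy scalar "shape" surrogate standing in for the paper's deep RL agent and flow-physics forward model; all names, sizes, and dynamics here are hypothetical.

```python
import random

N_PILLARS, SEQ_LEN, TARGET = 4, 5, 10  # hypothetical pillar types, device length, target shape


def q_learning(episodes=3000, alpha=0.2, gamma=1.0, eps=0.2, seed=0):
    """Learn Q-values for sequentially picking pillars to hit TARGET."""
    rng = random.Random(seed)
    Q = {}  # state (step, shape_summary) -> per-action value estimates

    def qvals(s):
        return Q.setdefault(s, [0.0] * N_PILLARS)

    for _ in range(episodes):
        step, shape = 0, 0
        while step < SEQ_LEN:
            s = (step, shape)
            # Epsilon-greedy selection over the discrete pillar choices.
            if rng.random() < eps:
                a = rng.randrange(N_PILLARS)
            else:
                a = max(range(N_PILLARS), key=lambda i: qvals(s)[i])
            # Toy surrogate for the flow-deformation map: each pillar shifts
            # a scalar shape summary (the real framework would evaluate a
            # forward flow model here).
            next_step, next_shape = step + 1, shape + a
            done = next_step == SEQ_LEN
            # Reward only at episode end: negative distance to the target.
            r = -abs(next_shape - TARGET) if done else 0.0
            target = r if done else gamma * max(qvals((next_step, next_shape)))
            qvals(s)[a] += alpha * (target - qvals(s)[a])
            step, shape = next_step, next_shape
    return Q


def greedy_design(Q):
    """Roll out the greedy policy to extract a pillar sequence and its reward."""
    step, shape, seq = 0, 0, []
    while step < SEQ_LEN:
        vals = Q.get((step, shape), [0.0] * N_PILLARS)
        a = max(range(N_PILLARS), key=lambda i: vals[i])
        seq.append(a)
        step, shape = step + 1, shape + a
    return seq, -abs(shape - TARGET)
```

Under these toy dynamics the greedy rollout after training reaches the target exactly, and the same agent can be retargeted to a different design objective simply by changing `TARGET` — a scaled-down analogue of the abstract's claim that a single generic agent can pursue multiple design objectives.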

Full description

Saved in:
Bibliographic Details
Published in: Journal of mechanical design (1990), 2019-11, Vol. 141 (11)
Main authors: Lee, Xian Yeow, Balu, Aditya, Stoecklein, Daniel, Ganapathysubramanian, Baskar, Sarkar, Soumik
Format: Article
Language: English
Subjects: Design Automation
Online Access: Full text
container_end_page
container_issue 11
container_start_page
container_title Journal of mechanical design (1990)
container_volume 141
creator Lee, Xian Yeow
Balu, Aditya
Stoecklein, Daniel
Ganapathysubramanian, Baskar
Sarkar, Soumik
description Efficient exploration of design spaces is highly sought after in engineering applications. A spectrum of tools has been proposed to deal with the computational difficulties associated with such problems. In the context of our case study, these tools can be broadly classified into optimization and supervised learning approaches. Optimization approaches, while successful, are inherently data inefficient, with evolutionary optimization-based methods being a good example. This inefficiency stems from data not being reused from previous design explorations. Alternatively, supervised learning-based design paradigms are data efficient. However, the quality of ensuing solutions depends heavily on the quality of data available. Furthermore, it is difficult to incorporate physics models and domain knowledge aspects of design exploration into pure-learning-based methods. In this work, we formulate a reinforcement learning (RL)-based design framework that mitigates disadvantages of both approaches. Our framework simultaneously finds solutions that are more efficient compared with supervised learning approaches while using data more efficiently compared with genetic algorithm (GA)-based optimization approaches. We illustrate our framework on a problem of microfluidic device design for flow sculpting, and our results show that a single generic RL agent is capable of exploring the solution space to achieve multiple design objectives. Additionally, we demonstrate that the RL agent can be used to solve more complex problems using a targeted refinement step. Thus, we address the data efficiency limitation of optimization-based methods and the limited data problem of supervised learning-based methods. The versatility of our framework is illustrated by utilizing it to gain domain insights and to incorporate domain knowledge. We envision such RL frameworks to have an impact on design science.
doi_str_mv 10.1115/1.4044397
format Article
fulltext fulltext
identifier ISSN: 1050-0472
ispartof Journal of mechanical design (1990), 2019-11, Vol.141 (11)
issn 1050-0472
1528-9001
language eng
recordid cdi_crossref_primary_10_1115_1_4044397
source ASME Digital Library - Journals; Alma/SFX Local Collection
subjects Design Automation
title A Case Study of Deep Reinforcement Learning for Engineering Design: Application to Microfluidic Devices for Flow Sculpting
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T22%3A20%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-asme_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Case%20Study%20of%20Deep%20Reinforcement%20Learning%20for%20Engineering%20Design:%20Application%20to%20Microfluidic%20Devices%20for%20Flow%20Sculpting&rft.jtitle=Journal%20of%20mechanical%20design%20(1990)&rft.au=Lee,%20Xian%20Yeow&rft.date=2019-11-01&rft.volume=141&rft.issue=11&rft.issn=1050-0472&rft.eissn=1528-9001&rft_id=info:doi/10.1115/1.4044397&rft_dat=%3Casme_cross%3E956254%3C/asme_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true