Delving into Macro Placement with Reinforcement Learning

In physical design, human designers typically place macros via trial and error, which is a Markov decision process. Reinforcement learning (RL) methods have demonstrated superhuman performance on macro placement. In this paper, we propose an extension to this prior work (Mirhoseini et al., 2020)...

Detailed Description

Saved in:
Bibliographic Details
Main authors: Jiang, Zixuan; Songhori, Ebrahim; Wang, Shen; Goldie, Anna; Mirhoseini, Azalia; Jiang, Joe; Lee, Young-Joon; Pan, David Z
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Jiang, Zixuan; Songhori, Ebrahim; Wang, Shen; Goldie, Anna; Mirhoseini, Azalia; Jiang, Joe; Lee, Young-Joon; Pan, David Z
description In physical design, human designers typically place macros via trial and error, which is a Markov decision process. Reinforcement learning (RL) methods have demonstrated superhuman performance on macro placement. In this paper, we propose an extension to this prior work (Mirhoseini et al., 2020). We first describe the details of the policy and value network architecture. We replace the force-directed method with DREAMPlace for placing standard cells in the RL environment. We also compare our improved method with other academic placers on public benchmarks.
doi_str_mv 10.48550/arxiv.2109.02587
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2109.02587
language eng
recordid cdi_arxiv_primary_2109_02587
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
title Delving into Macro Placement with Reinforcement Learning
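The abstract above frames macro placement as a Markov decision process: macros are placed one at a time, and the final layout is scored by a wirelength objective. As a rough illustration only (the environment, `hpwl`, and `greedy_place` below are hypothetical, and a greedy action choice stands in for the paper's policy network), the sequential formulation can be sketched as:

```python
import itertools

# Toy sketch (assumed, NOT the authors' implementation): macro placement
# framed as a sequential decision process. State = partial placement,
# action = pick a free grid cell for the next macro, and the objective is
# half-perimeter wirelength (HPWL), a standard placement cost proxy.

def hpwl(placement, nets):
    """Half-perimeter wirelength; each net is a list of macro ids."""
    total = 0
    for net in nets:
        xs = [placement[m][0] for m in net]
        ys = [placement[m][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_place(num_macros, nets, grid=4):
    """Place macros one at a time, greedily minimizing HPWL over the
    nets whose pins are already determined. A learned policy network
    would replace this greedy action choice in the RL formulation."""
    placement = {}
    free = set(itertools.product(range(grid), range(grid)))
    for m in range(num_macros):
        # Only score nets fully determined once macro m is placed.
        visible = [n for n in nets
                   if all(v in placement or v == m for v in n)]
        best = min(sorted(free),
                   key=lambda cell: hpwl({**placement, m: cell}, visible))
        placement[m] = best
        free.remove(best)
    return placement, -hpwl(placement, nets)  # reward = negative wirelength

placement, reward = greedy_place(3, nets=[[0, 1], [1, 2]])  # reward → -2
```

In the paper's setting, the greedy step is replaced by sampling from a trained policy network, and the reward additionally reflects standard-cell placement produced by DREAMPlace rather than a force-directed method.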