Are LLMs Any Good for High-Level Synthesis?

The increasing complexity and demand for faster, energy-efficient hardware designs necessitate innovative High-Level Synthesis (HLS) methodologies. This paper explores the potential of Large Language Models (LLMs) to streamline or replace the HLS process, leveraging their ability to understand natural language specifications and refactor code. We survey the current research and conduct experiments comparing Verilog designs generated by a standard HLS tool (Vitis HLS) with those produced by LLMs translating C code or natural language specifications. Our evaluation focuses on quantifying the impact on performance, power, and resource utilization, providing an assessment of the efficiency of LLM-based approaches. This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.

Bibliographic Details
Main Authors: Liao, Yuchao; Adegbija, Tosiron; Lysecky, Roman
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Hardware Architecture
creator Liao, Yuchao
Adegbija, Tosiron
Lysecky, Roman
description The increasing complexity and demand for faster, energy-efficient hardware designs necessitate innovative High-Level Synthesis (HLS) methodologies. This paper explores the potential of Large Language Models (LLMs) to streamline or replace the HLS process, leveraging their ability to understand natural language specifications and refactor code. We survey the current research and conduct experiments comparing Verilog designs generated by a standard HLS tool (Vitis HLS) with those produced by LLMs translating C code or natural language specifications. Our evaluation focuses on quantifying the impact on performance, power, and resource utilization, providing an assessment of the efficiency of LLM-based approaches. This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.
doi_str_mv 10.48550/arxiv.2408.10428
format Article
creationdate 2024-08-19
rights http://creativecommons.org/licenses/by/4.0
oa free_for_read
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2408.10428
language eng
recordid cdi_arxiv_primary_2408_10428
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Hardware Architecture
title Are LLMs Any Good for High-Level Synthesis?
url https://arxiv.org/abs/2408.10428