Are LLMs Any Good for High-Level Synthesis?
Main Authors: , ,
Format: Article
Language: English
Keywords:
Online Access: Order full text
Abstract: The increasing complexity and demand for faster, energy-efficient hardware
designs necessitate innovative High-Level Synthesis (HLS) methodologies. This
paper explores the potential of Large Language Models (LLMs) to streamline or
replace the HLS process, leveraging their ability to understand natural
language specifications and refactor code. We survey the current research and
conduct experiments comparing Verilog designs generated by a standard HLS tool
(Vitis HLS) with those produced by LLMs translating C code or natural language
specifications. Our evaluation focuses on quantifying the impact on
performance, power, and resource utilization, providing an assessment of the
efficiency of LLM-based approaches. This study aims to illuminate the role of
LLMs in HLS, identifying promising directions for optimized hardware design in
applications such as AI acceleration, embedded systems, and high-performance
computing.

DOI: 10.48550/arxiv.2408.10428
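To make the comparison described in the abstract concrete, below is a minimal sketch of the kind of input involved: a small C kernel that could be handed either to Vitis HLS or to an LLM with a request to produce equivalent Verilog. The kernel and the pipeline directive are illustrative assumptions, not code taken from the paper.

```c
/* Illustrative only: a small C kernel of the sort that either Vitis HLS or
 * an LLM might be asked to translate into Verilog. The pragma is a standard
 * Vitis HLS directive requesting a pipelined loop with an initiation
 * interval of 1; it is ignored by ordinary C compilers. */
#define N 128

void vec_add(const int a[N], const int b[N], int out[N]) {
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
```

In either flow, the resulting RTL would then be evaluated on the metrics the abstract names: performance, power, and resource utilization.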