ITERTL: An Iterative Framework for Fine-tuning LLMs for RTL Code Generation
Saved in:
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Recently, large language models (LLMs) have demonstrated excellent performance in understanding human instructions and generating code, which has inspired researchers to explore the feasibility of generating RTL code with LLMs. However, existing approaches to fine-tuning LLMs on RTL code are typically conducted on fixed datasets, which do not fully stimulate the capability of LLMs and require large amounts of reference data. To mitigate these issues, we introduce a simple yet effective iterative training paradigm named ITERTL. During each iteration, samples are drawn from the model trained in the previous cycle; these new samples are then used for training in the current loop. Through this iterative approach, the distribution mismatch between the model and the training samples is reduced. The model is thus enabled to explore a broader generative space and receive more comprehensive feedback. Theoretical analyses investigate the mechanism behind this effectiveness. Experimental results show that the model trained with our proposed approach can compete with, and even outperform, the state-of-the-art (SOTA) open-source model using only about 37% of the reference samples, achieving remarkable pass@1 rates of 42.9% and 62.2% on the two VerilogEval evaluation datasets, respectively. When using the same amount of reference samples, our method achieves relative improvements of 16.9% and 12.5% in pass@1 over the non-iterative method. This study facilitates the application of LLMs for generating RTL code in practical scenarios with limited data. |
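The iterative loop described in the abstract (draw samples from the model trained in the previous cycle, then train on those samples) can be illustrated with a toy numeric stand-in. This is only a sketch of the general sample-then-train pattern, not the paper's actual implementation: the 1-D "model", the closeness-based reward, and all names below are illustrative assumptions.

```python
import random


def itertl_sketch(reference=1.0, iterations=5, samples_per_iter=50, lr=0.5, seed=0):
    """Toy sketch of an ITERTL-style iterative loop.

    The "model" is a single Gaussian mean; the "reward" keeps the samples
    closest to a reference value (a stand-in for functional correctness).
    Because each round's training data is drawn from the current model,
    the mismatch between model and training distribution shrinks over time.
    """
    rng = random.Random(seed)
    mean = 0.0  # stand-in for the model's parameters
    for _ in range(iterations):
        # Step 1: sample from the model trained in the previous cycle.
        draws = [mean + rng.gauss(0, 1) for _ in range(samples_per_iter)]
        # Step 2: score the samples and keep the best half.
        draws.sort(key=lambda x: abs(x - reference))
        kept = draws[: samples_per_iter // 2]
        # Step 3: "fine-tune" on the kept samples for this loop.
        target = sum(kept) / len(kept)
        mean += lr * (target - mean)
    return mean
```

Each iteration refreshes the training pool from the model itself, so the model gradually drifts toward outputs that score well, which mirrors the distribution-mismatch argument made in the abstract.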
| DOI: | 10.48550/arxiv.2407.12022 |