Generative Software Engineering
Saved in:
Main authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | The rapid development of deep learning techniques, improved
computational power, and the availability of vast training data have led to
significant advancements in pre-trained models and large language models
(LLMs). Pre-trained models based on architectures such as BERT and
Transformer, as well as LLMs like ChatGPT, have demonstrated remarkable
language capabilities and found applications in software engineering (SE).
SE tasks can be divided into many categories, among which generative tasks
attract the most attention from researchers. Pre-trained models and LLMs
possess powerful language representation and contextual awareness
capabilities, enabling them to leverage diverse training data and adapt to
generative tasks through fine-tuning, transfer learning, and prompt
engineering. These advantages make them effective tools for generative
tasks, on which they have demonstrated excellent performance. In this paper,
we present a comprehensive literature review of generative tasks in SE using
pre-trained models and LLMs. We categorize SE generative tasks according to
software engineering methodologies and summarize the advanced pre-trained
models and LLMs involved, as well as the datasets and evaluation metrics
used. Additionally, we identify key strengths, weaknesses, and gaps in
existing approaches, and propose potential research directions. This review
aims to provide researchers and practitioners with an in-depth analysis of,
and guidance on, the application of pre-trained models and LLMs to
generative tasks within SE. |
---|---|
DOI: | 10.48550/arxiv.2403.02583 |
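The abstract names prompt engineering as one way to adapt LLMs to generative SE tasks. As a minimal sketch of that idea, the snippet below assembles a few-shot prompt for a code-summarization task; `query_llm`, the example functions, and the prompt wording are all illustrative assumptions, not anything specified by the paper.

```python
# Minimal sketch of few-shot prompt engineering for a generative SE task
# (code summarization). `query_llm` is a hypothetical placeholder for a
# real LLM API call; only the prompt construction is concrete here.

FEW_SHOT_EXAMPLES = [
    ("def add(a, b):\n    return a + b",
     "Return the sum of two numbers."),
    ("def is_even(n):\n    return n % 2 == 0",
     "Check whether a number is even."),
]

def build_prompt(code: str) -> str:
    """Assemble a few-shot prompt: task instruction, examples, then query."""
    parts = ["Summarize each Python function in one sentence.\n"]
    for snippet, summary in FEW_SHOT_EXAMPLES:
        parts.append(f"Code:\n{snippet}\nSummary: {summary}\n")
    # The final block leaves "Summary:" open for the model to complete.
    parts.append(f"Code:\n{code}\nSummary:")
    return "\n".join(parts)

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire up an actual model API in practice."""
    raise NotImplementedError

prompt = build_prompt("def square(x):\n    return x * x")
print(prompt)
```

In contrast to fine-tuning or transfer learning, this adaptation changes no model weights: task behavior is steered entirely by the instruction and the in-context examples.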