Generating Test Scenarios from NL Requirements using Retrieval-Augmented LLMs: An Industrial Study
Saved in:
Main Author: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | Test scenarios are specific instances of test cases that describe actions to
validate a particular software functionality. By outlining the conditions under
which the software operates and the expected outcomes, test scenarios ensure
that the software functionality is tested in an integrated manner. Test
scenarios are crucial for systematically testing an application under various
conditions, including edge cases, to identify potential issues and guarantee
overall performance and reliability. Specifying test scenarios is tedious and
requires a deep understanding of software functionality and the underlying
domain. It further demands substantial effort and investment from already time-
and budget-constrained requirements engineers and testing teams. This paper
presents an automated approach (RAGTAG) for test scenario generation using
Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs). RAG
allows the integration of specific domain knowledge with LLMs' generation
capabilities. We evaluate RAGTAG on two industrial projects from Austrian Post
with bilingual requirements in German and English. Our results from an
interview survey conducted with four experts on five dimensions (relevance,
coverage, correctness, coherence, and feasibility) affirm the potential of
RAGTAG in automating test scenario generation. Specifically, our results
indicate that, despite the difficult task of analyzing bilingual requirements,
RAGTAG is able to produce scenarios that are well-aligned with the underlying
requirements and provide coverage of different aspects of the intended
functionality. The generated scenarios are easily understandable to experts and
feasible for testing in the project environment. The overall correctness is
deemed satisfactory; however, gaps in capturing exact action sequences and
domain nuances remain, underscoring the need for domain expertise when applying
LLMs. |
---|---|
DOI: | 10.48550/arxiv.2404.12772 |
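The core idea summarized in the abstract, retrieving project-specific domain knowledge and combining it with an LLM's generation capabilities, can be illustrated with a minimal sketch. All names here (`score`, `retrieve`, `build_prompt`) and the toy keyword-overlap retriever are assumptions for illustration; they stand in for the embedding-based retrieval and LLM call of a real RAG pipeline and are not the paper's RAGTAG implementation.

```python
# Minimal RAG-style sketch: retrieve relevant domain documents for a
# requirement, then assemble an augmented prompt for an LLM.
# Illustrative only; a real system would use vector embeddings and an LLM API.

def score(query: str, doc: str) -> float:
    """Crude keyword-overlap relevance score (stand-in for vector similarity)."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant domain documents for the requirement."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(requirement: str, context_docs: list[str]) -> str:
    """Assemble the retrieval-augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Given the following domain context:\n"
        f"{context}\n\n"
        "Generate test scenarios for this requirement:\n"
        f"{requirement}\n"
    )

# Toy domain corpus (hypothetical postal-logistics snippets).
corpus = [
    "Parcel tracking updates the delivery status after each scan event.",
    "Invoices are archived for seven years per company policy.",
    "A scan event records parcel id, timestamp, and depot location.",
]
requirement = "The system shall update parcel status on each scan event."
docs = retrieve(requirement, corpus)
prompt = build_prompt(requirement, docs)
print(prompt)
```

The design point the abstract highlights is that retrieval grounds generation: only the documents relevant to the requirement are injected into the prompt, which is how domain knowledge reaches the LLM without fine-tuning.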