Safety Analysis in the Era of Large Language Models: A Case Study of STPA using ChatGPT
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Can safety analysis make use of Large Language Models (LLMs)? A case study explores Systems Theoretic Process Analysis (STPA) applied to Automatic Emergency Brake (AEB) and Electricity Demand Side Management (DSM) systems using ChatGPT. We investigate how collaboration schemes, input semantic complexity, and prompt guidelines influence STPA results. Comparative results show that using ChatGPT without human intervention may be inadequate due to reliability-related issues, but that, with careful design, it may outperform human experts. No statistically significant differences are found when varying the input semantic complexity or using common prompt guidelines, which suggests the necessity of developing domain-specific prompt engineering. We also highlight future challenges, including concerns about LLM trustworthiness and the necessity for standardisation and regulation in this domain.
DOI: 10.48550/arxiv.2304.01246
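The abstract describes prompting ChatGPT to carry out STPA steps such as identifying unsafe control actions for the AEB case. As a minimal illustrative sketch only (not the authors' actual prompts, collaboration scheme, or model configuration), the snippet below shows how a single STPA query might be issued through the OpenAI chat API; the prompt wording, model choice, and the helper name `query_stpa` are assumptions made for illustration.

```python
# Illustrative sketch: ask an LLM to enumerate unsafe control actions (UCAs)
# for one control action of an Automatic Emergency Brake (AEB) system.
# The prompt text, model name, and function below are assumptions for
# illustration, not the setup used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STPA_PROMPT = (
    "You are assisting with Systems Theoretic Process Analysis (STPA).\n"
    "System: Automatic Emergency Brake (AEB).\n"
    "Control action: 'apply emergency braking'.\n"
    "List potential unsafe control actions under the four STPA categories: "
    "not provided when needed; provided when it causes a hazard; provided "
    "too early, too late, or out of sequence; stopped too soon or applied "
    "too long. Return one bullet per unsafe control action."
)

def query_stpa(prompt: str, model: str = "gpt-4") -> str:
    """Send a single STPA prompt and return the raw model reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variability when comparing outputs
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(query_stpa(STPA_PROMPT))
```

In practice, and in line with the paper's finding that fully unsupervised use may be inadequate, the returned UCA list would still be reviewed and corrected by a human safety analyst rather than accepted as-is.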