Towards a benchmark dataset for large language models in the context of process automation
| Published in: | Digital Chemical Engineering, 2024-12, Vol. 13, p. 100186, Article 100186 |
| --- | --- |
| Main authors: | , |
| Format: | Article |
| Language: | English |
| Online access: | Full text |
Abstract: The field of process automation possesses a substantial corpus of textual documentation that can be leveraged with Large Language Models (LLMs) and Natural Language Understanding (NLU) systems. Recent advances in diverse, openly available LLMs present an opportunity to use them effectively in this area. However, LLMs are pre-trained on general textual data and lack knowledge of specialized, niche domains such as process automation. Furthermore, the lack of datasets specifically tailored to process automation makes it difficult to accurately assess the effectiveness of LLMs in this domain. This paper aims to lay the foundation for a multitask benchmark for evaluating and adapting LLMs in process automation. We introduce a novel workflow for semi-automated data generation, specifically tailored to creating extractive Question Answering (QA) datasets. The proposed methodology involves extracting passages from academic papers on process automation, generating corresponding questions, and subsequently annotating and evaluating the dataset. The initial dataset also undergoes data augmentation and is evaluated using semantic-similarity metrics. Finally, six LLMs are benchmarked on the newly created extractive QA dataset for process automation.
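The evaluation step the abstract describes (scoring extractive answers with a semantic-similarity metric) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the passage, question, gold answer, and both model names (`deepset/roberta-base-squad2` as a stand-in extractive QA model, `all-MiniLM-L6-v2` as the similarity encoder) are placeholders, not the paper's actual dataset or the six benchmarked LLMs.

```python
# Illustrative sketch of extractive-QA benchmarking with a semantic-similarity
# score, in the spirit of the abstract. All data and model choices below are
# hypothetical placeholders, not the paper's actual setup.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Hypothetical SQuAD-style item built from a process-automation passage.
dataset = [
    {
        "context": (
            "The PID controller adjusts the valve position based on the "
            "deviation between the measured flow rate and its setpoint."
        ),
        "question": "What does the PID controller adjust?",
        "answer": "the valve position",
    },
]

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

for item in dataset:
    # Extract a predicted answer span from the passage.
    pred = qa(question=item["question"], context=item["context"])["answer"]
    # Cosine similarity between predicted and gold answer embeddings serves
    # as the semantic-similarity metric.
    emb = encoder.encode([pred, item["answer"]], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    print(f"predicted: {pred!r}  similarity to gold: {score:.2f}")
```

A similarity-based score is more forgiving than exact match: a prediction such as "valve position" still scores high against the gold span "the valve position", which matters when answer wording shifts during data augmentation.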
| ISSN: | 2772-5081 |
| --- | --- |
| DOI: | 10.1016/j.dche.2024.100186 |