OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Information Technology (IT) Operations (Ops), particularly Artificial
Intelligence for IT Operations (AIOps), is essential for maintaining the
orderly and stable operation of existing information systems. According to
Gartner's prediction, the use of AI technology for automated IT operations has
become a new trend. Large language models (LLMs), which have exhibited
remarkable capabilities in NLP-related tasks, are showing great potential in
the field of AIOps, for example in root cause analysis of failures, generation
of operations and maintenance scripts, and summarization of alert information.
Nevertheless, the performance of current LLMs on Ops tasks has yet to be
determined. In this paper, we present OpsEval, a comprehensive task-oriented
Ops benchmark designed for LLMs. For the first time, OpsEval assesses LLMs'
proficiency in various crucial scenarios at different ability levels. The
benchmark includes 7,184 multiple-choice questions and 1,736 question-answering
(QA) questions in English and Chinese. By conducting a comprehensive
performance evaluation of the current leading large language models, we show
how various LLM techniques affect Ops performance, and we discuss findings on
topics including model quantization, QA evaluation, and hallucination. To
ensure the credibility of our evaluation, we invited dozens of domain experts
to manually review our questions. We have also open-sourced 20% of the test
questions to help researchers perform preliminary evaluations of their OpsLLM
models; the remaining 80% of the data is withheld to prevent test set leakage.
Additionally, we have built an online leaderboard that is updated in real time
and will continue to be maintained, ensuring that newly emerging LLMs are
evaluated promptly. Both our dataset and leaderboard have been made public. |
---|---|
DOI: | 10.48550/arxiv.2310.07637 |
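
As a rough illustration of the multiple-choice evaluation the abstract describes, the sketch below scores a model's predicted option letters against gold answers to compute accuracy. The field names (`question`, `choices`, `answer`), the sample question, and the `answer_question()` stub are assumptions made for illustration only, not the actual OpsEval data schema or evaluation code.

```python
# Hypothetical sketch of scoring a model on OpsEval-style multiple-choice
# questions. The data layout and the answer_question() stub are illustrative
# assumptions, not the paper's released format or API.

def answer_question(question: str, choices: dict[str, str]) -> str:
    """Placeholder for an LLM call that returns one option letter (A-D)."""
    return "A"  # a real implementation would prompt the model here


def accuracy(questions: list[dict]) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    correct = 0
    for q in questions:
        prediction = answer_question(q["question"], q["choices"])
        if prediction == q["answer"]:
            correct += 1
    return correct / len(questions)


sample = [
    {
        "question": "Which command shows listening TCP ports on a Linux host?",
        "choices": {"A": "ss -tlnp", "B": "df -h", "C": "free -m", "D": "uptime"},
        "answer": "A",
    },
]
print(f"accuracy: {accuracy(sample):.2%}")
```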