OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models


Bibliographic details
Main authors: Liu, Yuhe; Pei, Changhua; Xu, Longlong; Chen, Bohan; Sun, Mingze; Zhang, Zhirui; Sun, Yongqian; Zhang, Shenglin; Wang, Kun; Zhang, Haiming; Li, Jianhui; Xie, Gaogang; Wen, Xidao; Nie, Xiaohui; Ma, Minghua; Pei, Dan
Format: Article
Language: English
description Information Technology (IT) Operations (Ops), particularly Artificial Intelligence for IT Operations (AIOps), is the guarantee for maintaining the orderly and stable operation of existing information systems. According to Gartner's prediction, the use of AI technology for automated IT operations has become a new trend. Large language models (LLMs) that have exhibited remarkable capabilities in NLP-related tasks, are showing great potential in the field of AIOps, such as in aspects of root cause analysis of failures, generation of operations and maintenance scripts, and summarizing of alert information. Nevertheless, the performance of current LLMs in Ops tasks is yet to be determined. In this paper, we present OpsEval, a comprehensive task-oriented Ops benchmark designed for LLMs. For the first time, OpsEval assesses LLMs' proficiency in various crucial scenarios at different ability levels. The benchmark includes 7184 multi-choice questions and 1736 question-answering (QA) formats in English and Chinese. By conducting a comprehensive performance evaluation of the current leading large language models, we show how various LLM techniques can affect the performance of Ops, and discussed findings related to various topics, including model quantification, QA evaluation, and hallucination issues. To ensure the credibility of our evaluation, we invite dozens of domain experts to manually review our questions. At the same time, we have open-sourced 20% of the test QA to assist current researchers in preliminary evaluations of their OpsLLM models. The remaining 80% of the data, which is not disclosed, is used to eliminate the issue of the test set leakage. Additionally, we have constructed an online leaderboard that is updated in real-time and will continue to be updated, ensuring that any newly emerging LLMs will be evaluated promptly. Both our dataset and leaderboard have been made public.
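The evaluation protocol the abstract outlines — scoring models on multiple-choice questions over a question pool that is 20% openly released and 80% held back to prevent test-set leakage — can be sketched as follows. The data layout, function names, and toy questions here are illustrative assumptions, not the paper's actual code or data.

```python
import random

def split_public_private(questions, public_frac=0.2, seed=0):
    """Partition a question pool into an open subset (released for
    preliminary evaluation) and a held-back subset (kept private to
    avoid test-set leakage), mirroring the 20%/80% split described."""
    rng = random.Random(seed)
    shuffled = questions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * public_frac)
    return shuffled[:cut], shuffled[cut:]

def multiple_choice_accuracy(items, predict):
    """Score a model on multiple-choice items; `predict` maps a
    question dict to a choice label such as 'A'."""
    correct = sum(1 for q in items if predict(q) == q["answer"])
    return correct / len(items)

# Toy illustration with made-up questions (not OpsEval data).
pool = [{"question": f"q{i}", "choices": ["A", "B", "C", "D"], "answer": "A"}
        for i in range(10)]
public, private = split_public_private(pool)
acc = multiple_choice_accuracy(public, lambda q: "A")
```

In this sketch, hidden-set scores would be computed by the leaderboard operators running `multiple_choice_accuracy` on the private split, so submitted models never see those questions.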
doi 10.48550/arxiv.2310.07637
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Networking and Internet Architecture