Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study
Preliminary security risk analysis (PSRA) provides a quick approach to identify, evaluate and propose remediation to potential risks in specific scenarios. The extensive expertise required for an effective PSRA and the substantial amount of text-related tasks hinder quick assessments in mission...
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Esposito, Matteo; Palagiano, Francesco |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Esposito, Matteo; Palagiano, Francesco |
description | Preliminary security risk analysis (PSRA) provides a quick approach to identify, evaluate and propose remediation to potential risks in specific scenarios. The extensive expertise required for an effective PSRA and the substantial amount of text-related tasks hinder quick assessments in mission-critical contexts, where timely and prompt actions are essential. The speed and accuracy of human experts in PSRA significantly impact response time. A large language model can quickly summarise information in less time than a human. To our knowledge, no prior study has explored the capabilities of fine-tuned models (FTM) in PSRA. Our case study investigates the proficiency of FTM to assist practitioners in PSRA. We manually curated 141 representative samples from over 50 mission-critical analyses archived by the industrial context team in the last five years. We compared the proficiency of the FTM versus seven human experts. Within the industrial context, our approach has proven successful in reducing errors in PSRA, hastening security risk detection, and minimizing false positives and negatives. This translates to cost savings for the company by averting unnecessary expenses associated with implementing unwarranted countermeasures. Therefore, experts can focus on more comprehensive risk analysis, leveraging LLMs for an effective preliminary assessment within a condensed timeframe. |
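The abstract reports that the fine-tuned model reduced false positives and false negatives relative to expert judgments. A minimal, purely illustrative sketch of how such a comparison can be scored (the data and function below are hypothetical, not from the paper):

```python
# Illustrative sketch: scoring a model's preliminary risk flags against
# expert ground truth. The flags below are hypothetical sample data.

def confusion_counts(predicted, actual):
    """Count true/false positives and negatives over paired boolean flags."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    return tp, fp, fn, tn

# True = "scenario flagged as carrying a security risk"
model_flags  = [True, True, False, True, False, False]
expert_flags = [True, False, False, True, False, True]

tp, fp, fn, tn = confusion_counts(model_flags, expert_flags)
precision = tp / (tp + fp)   # fraction of flagged risks that were real
recall = tp / (tp + fn)      # fraction of real risks that were flagged
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
```

Fewer false positives raise precision (fewer unwarranted countermeasures, hence the cost savings the abstract mentions); fewer false negatives raise recall (fewer missed risks).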
doi_str_mv | 10.48550/arxiv.2403.15756 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2403_15756 |
source | arXiv.org; Free E-Journals |
subjects | Case studies; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computers and Society; Computer Science - Cryptography and Security; Computer Science - Software Engineering; Context; Impact response; Large language models; Risk analysis; Risk assessment; Security |
title | Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T22%3A29%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Leveraging%20Large%20Language%20Models%20for%20Preliminary%20Security%20Risk%20Analysis:%20A%20Mission-Critical%20Case%20Study&rft.jtitle=arXiv.org&rft.au=Esposito,%20Matteo&rft.date=2024-03-23&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2403.15756&rft_dat=%3Cproquest_arxiv%3E2986603402%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2986603402&rft_id=info:pmid/&rfr_iscdi=true |