From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads
This research article critically examines the potential risks and implications arising from the malicious utilization of large language models (LLMs), focusing specifically on ChatGPT and Google's Bard. Although these large language models have numerous beneficial applications, the misuse of this technology by cybercriminals for creating offensive payloads and tools is a significant concern.
Saved in:
Published in: | arXiv.org 2023-05 |
---|---|
Main authors: | Sai Charan, P V; Chunduri, Hrushikesh; Anand, P Mohan; Shukla, Sandeep K |
Format: | Article |
Language: | eng |
Subjects: | Chatbots; Cybersecurity; Large language models; Malware; Payloads; Ransomware |
Online access: | Full text |
container_title | arXiv.org |
creator | Sai Charan, P V; Chunduri, Hrushikesh; Anand, P Mohan; Shukla, Sandeep K |
description | This research article critically examines the potential risks and implications arising from the malicious utilization of large language models (LLMs), focusing specifically on ChatGPT and Google's Bard. Although these large language models have numerous beneficial applications, the misuse of this technology by cybercriminals for creating offensive payloads and tools is a significant concern. In this study, we systematically generated implementable code for the top-10 MITRE Techniques prevalent in 2022 using ChatGPT, and conducted a comparative analysis of its performance against Google's Bard. Our experiments reveal that ChatGPT can enable attackers to accelerate more targeted and sophisticated attacks. Additionally, the technology gives amateur attackers broader capabilities to perform a wide range of attacks and empowers script kiddies to develop customized tools, contributing to the acceleration of cybercrime. Furthermore, LLMs significantly benefit malware authors, particularly ransomware gangs, in generating sophisticated variants of wiper and ransomware attacks with ease. On a positive note, our study also highlights how offensive security researchers and penetration testers can use LLMs to simulate realistic attack scenarios, identify potential vulnerabilities, and better protect organizations. Overall, we conclude by emphasizing the need for increased vigilance in mitigating the risks associated with LLMs. This includes implementing robust security measures, increasing awareness and education around the potential risks of this technology, and collaborating with security experts to stay ahead of emerging threats. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2819149288 |
source | Free E-Journals |
subjects | Chatbots; Cybersecurity; Large language models; Malware; Payloads; Ransomware |
title | From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T07%3A17%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=From%20Text%20to%20MITRE%20Techniques:%20Exploring%20the%20Malicious%20Use%20of%20Large%20Language%20Models%20for%20Generating%20Cyber%20Attack%20Payloads&rft.jtitle=arXiv.org&rft.au=Sai%20Charan,%20P%20V&rft.date=2023-05-24&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2819149288%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2819149288&rft_id=info:pmid/&rfr_iscdi=true |