From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads
Main authors: , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: This research article critically examines the potential risks and
implications arising from the malicious use of large language models (LLMs),
focusing specifically on ChatGPT and Google's Bard. Although these models have
numerous beneficial applications, their misuse by cybercriminals to create
offensive payloads and tools is a significant concern. In this study, we
systematically generated implementable code for the top-10 MITRE Techniques
prevalent in 2022 using ChatGPT, and conducted a comparative analysis of its
performance against Google's Bard. Our experiments reveal that ChatGPT can
enable attackers to accelerate more targeted and sophisticated attacks. The
technology also gives amateur attackers the capability to perform a wider
range of attacks and empowers script kiddies to develop customized tools,
contributing to the acceleration of cybercrime. Furthermore, LLMs
significantly benefit malware authors, particularly ransomware gangs, by
making it easy to generate sophisticated variants of wiper and ransomware
attacks. On a positive note, our study also highlights how offensive security
researchers and penetration testers can use LLMs to simulate realistic attack
scenarios, identify potential vulnerabilities, and better protect
organizations. We conclude by emphasizing the need for increased vigilance in
mitigating the risks associated with LLMs, including implementing robust
security measures, raising awareness and education around the potential risks
of this technology, and collaborating with security experts to stay ahead of
emerging threats.
DOI: 10.48550/arxiv.2305.15336