AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns
SMS phishing, also known as "smishing", is a growing threat that tricks users into disclosing private information or clicking on URLs with malicious content through fraudulent mobile text messages. In the recent past, we have also observed a rapid advancement of conversational generative AI...
Saved in:
Main authors: | Shibli, Ashfak Md; Pritom, Mir Mehedi A; Gupta, Maanak |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Cryptography and Security |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Shibli, Ashfak Md; Pritom, Mir Mehedi A; Gupta, Maanak |
description | SMS phishing, also known as "smishing", is a growing threat that tricks users
into disclosing private information or clicking on URLs with malicious
content through fraudulent mobile text messages. In the recent past, we have also
observed a rapid advancement of conversational generative AI chatbot services
(e.g., OpenAI's ChatGPT, Google's BARD), which are powered by pre-trained large
language models (LLMs). These AI chatbots certainly have many utilities, but
it is not systematically understood how they can play a role in creating
threats and attacks. In this paper, we propose the AbuseGPT method to show how
existing generative AI-based chatbot services can be exploited by attackers in
the real world to create smishing texts and eventually lead to craftier smishing
campaigns. To the best of our knowledge, no pre-existing work clearly
demonstrates the impact of these generative text-based models on creating
SMS phishing. Thus, we believe this study is the first of its kind to shed
light on this emerging cybersecurity threat. We have found strong empirical
evidence that attackers can bypass the ethical safeguards in existing
generative AI-based chatbot services by crafting prompt injection attacks to
create new smishing campaigns. We also discuss future research
directions and guidelines to prevent the abuse of generative AI-based services
and safeguard users from smishing attacks. |
doi_str_mv | 10.48550/arxiv.2402.09728 |
format | Article |
creationdate | 2024-02-15 |
rights | http://creativecommons.org/licenses/by-nc-sa/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2402.09728 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2402_09728 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Cryptography and Security |
title | AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T05%3A41%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=AbuseGPT:%20Abuse%20of%20Generative%20AI%20ChatBots%20to%20Create%20Smishing%20Campaigns&rft.au=Shibli,%20Ashfak%20Md&rft.date=2024-02-15&rft_id=info:doi/10.48550/arxiv.2402.09728&rft_dat=%3Carxiv_GOX%3E2402_09728%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |