Stealthy Porn: Understanding Real-World Adversarial Images for Illicit Online Promotion

Bibliographic details
Main authors: Yuan, Kan, Tang, Di, Liao, Xiaojing, Wang, XiaoFeng, Feng, Xuan, Chen, Yi, Sun, Menghan, Lu, Haoran, Zhang, Kehuan
Format: Conference proceedings
Language: eng
Subjects:
Online access: Order full text
container_start_page 952
container_end_page 966
creator Yuan, Kan
Tang, Di
Liao, Xiaojing
Wang, XiaoFeng
Feng, Xuan
Chen, Yi
Sun, Menghan
Lu, Haoran
Zhang, Kehuan
description Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine-learning research. Despite important discoveries made by adversarial learning research, surprisingly little attention has been paid to the real-world adversarial techniques deployed by cybercriminals to evade image-based detection. Unlike adversarial examples that induce misclassification through nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step toward understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs), which are extensively used in underground advertising. We show that today's adversary strategically constructs APPIs to evade explicit-content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Male`na, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we discovered over 4,000 APPIs among 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit-content detectors (e.g., the Google Cloud Vision API and the Yahoo Open NSFW model), and the reasons these techniques work. Also studied is the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, the compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt by cybercriminals to steal others' images for their advertising.
The study highlights the importance of research on real-world adversarial learning and takes a first step toward mitigating the threats it poses.
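The core idea the abstract attributes to Male`na — scoring the image regions where content is least obfuscated rather than the image as a whole — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual implementation: the function names, the sliding-window strategy, and the stand-in "detector" are all hypothetical, chosen only to show why a per-region maximum resists obfuscation that a whole-image score does not.

```python
# Hypothetical sketch of region-focused scoring (illustrative only; not the
# paper's Male`na implementation). Instead of scoring the whole image once,
# score overlapping patches and keep the maximum, so obfuscation applied to
# most of the image cannot mask one clean, clearly visible region.

def max_region_score(image, scorer, window=2, stride=1):
    """Slide a window over a 2-D pixel grid; return the highest patch score."""
    rows, cols = len(image), len(image[0])
    best = 0.0
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            patch = [row[c:c + window] for row in image[r:r + window]]
            best = max(best, scorer(patch))
    return best

def toy_scorer(patch):
    """Stand-in for an explicit-content classifier: mean patch intensity."""
    cells = [v for row in patch for v in row]
    return sum(cells) / len(cells)

# An image that is mostly "noise" (zeros) with one clean bright region:
# the whole-image score is diluted, but the best 2x2 region scores high.
image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
print(toy_scorer(image))                    # whole-image score: 2.25
print(max_region_score(image, toy_scorer))  # best-region score: 9.0
```

A real detector would replace `toy_scorer` with a trained NSFW classifier (e.g., a model in the style of Yahoo's Open NSFW) applied per region; the design point is only that taking the maximum over regions restores sensitivity that whole-image averaging loses.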
doi_str_mv 10.1109/SP.2019.00032
format Conference Proceeding
fullrecord (raw Primo/IEEE Xplore export omitted; unique fields preserved below)
eisbn 9781538666609, 153866660X
publisher IEEE
startdate 2019-05-01
tpages 15
fulltext fulltext_linktorsrc
identifier EISSN: 2375-1207
ispartof Proceedings - IEEE Symposium on Security and Privacy, 2019, p.952-966
issn 2375-1207
language eng
recordid cdi_ieee_primary_8835391
source IEEE Electronic Library (IEL)
subjects adversarial-images
Advertising
cybercrime
Deep learning
Detectors
Ecosystems
Feature extraction
Google
Image recognition
title Stealthy Porn: Understanding Real-World Adversarial Images for Illicit Online Promotion
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T09%3A11%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Stealthy%20Porn:%20Understanding%20Real-World%20Adversarial%20Images%20for%20Illicit%20Online%20Promotion&rft.btitle=Proceedings%20-%20IEEE%20Symposium%20on%20Security%20and%20Privacy&rft.au=Yuan,%20Kan&rft.date=2019-05-01&rft.spage=952&rft.epage=966&rft.pages=952-966&rft.eissn=2375-1207&rft_id=info:doi/10.1109/SP.2019.00032&rft_dat=%3Cieee_RIE%3E8835391%3C/ieee_RIE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781538666609&rft.eisbn_list=153866660X&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=8835391&rfr_iscdi=true