Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms. These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks, either explicitly during analysis or implicitly during training.
Saved in:
Published in: | IEEE Transactions on Information Forensics and Security, 2024, Vol. 19, p. 2143-2156 |
Main authors: | Fang, Shengbang; Stamm, Matthew C. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 2156 |
container_issue | |
container_start_page | 2143 |
container_title | IEEE transactions on information forensics and security |
container_volume | 19 |
creator | Fang, Shengbang; Stamm, Matthew C. |
description | Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms. These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks, either explicitly during analysis or implicitly during training. At the same time, deep learning has enabled new forms of anti-forensic attacks, such as adversarial examples and generative adversarial network (GAN) based attacks. Thus far, however, no anti-forensic attack has been demonstrated against image splicing detection and localization algorithms. In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms such as EXIF-Net, Noiseprint, and Forensic Similarity Graphs. This attack operates by adversarially training an anti-forensic generator against a set of Siamese neural networks so that it is able to create synthetic forensic traces. Under analysis, these synthetic traces appear authentic and are self-consistent throughout an image. Through a series of experiments, we demonstrate that our attack is capable of fooling forensic splicing detection and localization algorithms without introducing visually detectable artifacts into an attacked image. Additionally, we demonstrate that our attack outperforms existing alternative attack approaches. |
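The core idea in the abstract — adversarially optimizing synthetic traces until a Siamese comparison scores a spliced region as consistent with the rest of the image — can be illustrated with a toy numerical sketch. Everything below is hypothetical: a frozen linear embedder stands in for the paper's deep forensic networks (EXIF-Net, Noiseprint, Forensic Similarity Graphs), and a single bounded perturbation optimized by projected gradient ascent stands in for the authors' GAN generator. It is not the paper's method, only the shape of the objective.

```python
import numpy as np

rng = np.random.default_rng(0)

D, P = 16, 64                       # embedding dim, flattened patch size
E = rng.normal(size=(D, P))         # frozen stand-in trace embedder

def embed(x):
    """Forensic-trace embedding of a flattened patch (hypothetical)."""
    return E @ x

def cosine(a, b):
    """Siamese consistency score between two trace embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

authentic = rng.normal(size=P)          # patch from the host image
spliced = rng.normal(size=P) + 3.0      # patch carrying a foreign trace

delta = np.zeros(P)                     # anti-forensic perturbation
lr, eps = 0.05, 1.0                     # step size, visual-distortion budget

for _ in range(200):
    a = embed(authentic)
    s = embed(spliced + delta)
    na, ns = np.linalg.norm(a), np.linalg.norm(s)
    # analytic gradient of cosine(a, s) w.r.t. s, chained through E
    grad_s = a / (na * ns) - (a @ s) * s / (na * ns**3)
    g = E.T @ grad_s
    delta += lr * g / (np.linalg.norm(g) + 1e-12)   # normalized ascent step
    delta = np.clip(delta, -eps, eps)               # keep the change small

before = cosine(embed(authentic), embed(spliced))
after = cosine(embed(authentic), embed(spliced + delta))
print(f"consistency before: {before:.3f}  after: {after:.3f}")
```

The clip step plays the role of the paper's constraint that the attack introduce no visually detectable artifacts: the perturbation budget `eps` caps how far any pixel may move while the consistency score is driven up.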
doi_str_mv | 10.1109/TIFS.2023.3346312 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1556-6013 |
ispartof | IEEE transactions on information forensics and security, 2024, Vol.19, p.2143-2156 |
issn | 1556-6013 1556-6021 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TIFS_2023_3346312 |
source | IEEE Electronic Library (IEL) |
subjects | adversarial attacks; Algorithms; Anti-forensics; Deep learning; Detectors; Feature extraction; Forensic computing; Forensics; Generative adversarial networks; Generators; Localization; Location awareness; Machine learning; Neural networks; Splicing; splicing detection and localization; Training |
title | Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T06%3A32%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Attacking%20Image%20Splicing%20Detection%20and%20Localization%20Algorithms%20Using%20Synthetic%20Traces&rft.jtitle=IEEE%20transactions%20on%20information%20forensics%20and%20security&rft.au=Fang,%20Shengbang&rft.date=2024&rft.volume=19&rft.spage=2143&rft.epage=2156&rft.pages=2143-2156&rft.issn=1556-6013&rft.eissn=1556-6021&rft.coden=ITIFA6&rft_id=info:doi/10.1109/TIFS.2023.3346312&rft_dat=%3Cproquest_RIE%3E2909277155%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2909277155&rft_id=info:pmid/&rft_ieee_id=10375138&rfr_iscdi=true |