Generalized Deepfake Attribution


Detailed Description

Bibliographic Details
Main authors: Shahid, Sowdagar Mahammad; Padhi, Sudev Kumar; Kashyap, Umesh; Ali, Sk. Subidh
Format: Article
Language: English
Description: The landscape of fake media creation changed with the introduction of Generative Adversarial Networks (GANs). Fake media creation has been on the rise with rapid advances in generation technology, leading to new challenges in detecting fake media. A fundamental characteristic of GANs is their sensitivity to parameter initialization, known as seeds. Each distinct seed used during training produces a unique model instance, resulting in divergent image outputs despite the same architecture. This means that a single GAN architecture can yield countless GAN model variants depending on the seed used. Existing methods for attributing deepfakes work well only if they have seen the specific GAN model during training; if a GAN architecture is retrained with a different seed, these methods struggle to attribute the fakes. This seed dependency makes it difficult to attribute deepfakes with existing methods. We propose a Generalized Deepfake Attribution Network (GDA-Net) to attribute fake images to their respective GAN architectures, even when they are generated by a version of the architecture retrained with a different seed (cross-seed) or by a fine-tuned version of an existing GAN model. Extensive experiments on cross-seed and fine-tuned data from GAN models show that our method is highly effective compared to existing methods. We have provided the source code to validate our results.
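The seed sensitivity the abstract describes can be illustrated with a minimal sketch (plain Python, no actual GAN; the `init_weights` helper and the 4-weight "model" are illustrative assumptions, not part of the paper's method): the same architecture initialized from different seeds yields different model instances, while the same seed reproduces the same instance.

```python
import random

def init_weights(seed, n=4):
    # Toy stand-in for GAN parameter initialization: draw n weights
    # from a generator seeded with the given value.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

w_a = init_weights(seed=0)   # one "model instance" of the architecture
w_b = init_weights(seed=1)   # same architecture, different seed
w_a2 = init_weights(seed=0)  # same seed again

assert w_a == w_a2  # identical seed reproduces the identical instance
assert w_a != w_b   # a different seed yields a divergent instance
```

This is why seed-specific artifacts generalize poorly: an attribution method fit to the instance `w_a` has no guarantee of recognizing `w_b`, even though both come from the same architecture; GDA-Net targets the architecture rather than the individual instance.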
DOI: 10.48550/arxiv.2406.18278
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition