DeepFake on Face and Expression Swap: A Review
Remarkable advances in deep learning have led to the emergence of highly realistic AI-generated videos known as deepfakes. Deepfakes use generative models to manipulate facial features, creating modified identities or expressions with impressive realism. These synthetic media creations can deceive, discredit, or blackmail individuals and threaten the integrity of legal, political, and social systems. Consequently, researchers are actively developing techniques to detect deepfake content in order to preserve privacy and combat the dissemination of fabricated media. This article presents a comprehensive study of existing methods for creating deepfake images and videos for face and expression replacement. It also provides an overview of publicly available deepfake datasets for benchmarking, which serve as important resources for training and evaluating deepfake detection systems. The study further examines the detection approaches used to identify deepfake face and expression swaps and discusses the challenges and issues involved. Beyond identifying existing barriers, it outlines future research directions to guide researchers toward the concerns that deepfake detection methods still need to address. In this way, the paper aims to facilitate the development of robust and effective deepfake detection solutions for face and expression swaps, thereby contributing to ongoing efforts to protect the authenticity and trustworthiness of visual media.
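The abstract notes that detection research typically trains classifiers to separate genuine from manipulated faces. As a rough, illustrative sketch only (not a method from this article or any surveyed work), the snippet below shows how a minimal frame-level deepfake classifier might be set up on a generic pretrained CNN backbone from torchvision; the class name, input size, and training details are assumptions for illustration.

```python
# Illustrative sketch only: a minimal frame-level real-vs-fake classifier.
# Not the article's method; paths, names, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the Weights enum


class FrameDeepfakeClassifier(nn.Module):
    """Binary real-vs-fake classifier over single face-crop frames."""

    def __init__(self, pretrained: bool = True):
        super().__init__()
        weights = models.ResNet18_Weights.DEFAULT if pretrained else None
        self.backbone = models.resnet18(weights=weights)
        # Replace the 1000-class ImageNet head with a single "fake" logit.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, 224, 224) face crops, ImageNet-normalized.
        return self.backbone(frames).squeeze(1)


if __name__ == "__main__":
    model = FrameDeepfakeClassifier(pretrained=False)  # False avoids a weight download here
    dummy_frames = torch.randn(4, 3, 224, 224)         # stand-in for four face crops
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])        # 1 = fake, 0 = real (hypothetical)
    logits = model(dummy_frames)
    loss = nn.BCEWithLogitsLoss()(logits, labels)
    print(logits.shape, float(loss))
```

In practice, surveyed detectors vary widely (frequency-domain cues, temporal models, artifact-specific features); this sketch only illustrates the common supervised binary-classification setup.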
Saved in:
Published in: | IEEE Access, 2023-01, Vol. 11, p. 1-1 |
---|---|
Main authors: | Waseem, Saima; Abubakar, Syed A.R; Ahmed, Bilal Ashfaq; Omar, Zaid; Eisa, Taiseer Abdalla Elfadil; Dalam, Mhassen Elnour Elneel |
Format: | Article |
Language: | English |
Subjects: | Deception; Decoding; Deep learning; Deepfake; Deepfakes; Face detection; Face manipulation; Face swap; Forensics; Generative adversarial networks; Generators; Hair; Media Forensic; Re-enactment; Training; Video |
Online access: | Full text |
DOI: | 10.1109/ACCESS.2023.3324403 |
ISSN: | 2169-3536 |
EISSN: | 2169-3536 |
Publisher: | Piscataway: IEEE |
Source: | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |