Fake and propaganda images detection using automated adaptive gaining sharing knowledge algorithm with DenseNet121
Published in: Journal of Ambient Intelligence and Humanized Computing, 2024-09, Vol. 15 (9), pp. 3519-3531
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Recent developments in natural language generation provide an additional tool for swaying public opinion on social media. The term "deep fake" originates from deep learning technology, which can effortlessly and seamlessly insert a person into digital media. Artificial Intelligence (AI) techniques are a crucial component of deep fakes, and generative capabilities greatly reinforce advances in language modeling for content generation. Owing to low-cost computing infrastructure, sophisticated tools, and readily available content on social media, deep fakes propagate misinformation and hoaxes; these technologies make it simple to produce misinformation that spreads fear and confusion. As a result, distinguishing authentic from fraudulent content can be challenging. This study presents a novel automated approach to deep fake identification based on Adaptive Gaining Sharing Knowledge (AGSK) with the DenseNet121 architecture. During pre-processing, sensitive data variance and noise are removed from the image. CapsuleNet is then used to extract feature vectors, and the deep fake is identified from these feature vectors by the DenseNet121 architecture, whose hyper-parameters are optimized using the AGSK model. The proposed deepfake image recognition model reduces the threat of propaganda and defamation, and the results demonstrate its reliability and effectiveness: it achieves a detection accuracy of 98%, higher than other state-of-the-art models.
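To make the pipeline described in the abstract more concrete, the sketch below shows how a DenseNet121 backbone might serve as the final real-vs-fake classifier. It is a minimal sketch assuming a PyTorch/torchvision implementation; the paper's CapsuleNet feature-extraction stage and the AGSK hyper-parameter search are not reproduced here, and the dropout rate and learning rate shown are hypothetical placeholders for values that AGSK would tune.

```python
# Minimal sketch of the classification stage only (assumed PyTorch/torchvision setup).
# The CapsuleNet feature extractor and AGSK optimizer from the paper are omitted;
# the dropout and learning-rate values are placeholder hyper-parameters.
import torch
import torch.nn as nn
from torchvision import models


class DeepfakeClassifier(nn.Module):
    """DenseNet121 backbone with a binary authentic/deep-fake head."""

    def __init__(self, dropout: float = 0.3):  # placeholder value AGSK would tune
        super().__init__()
        self.backbone = models.densenet121(weights=None)
        in_features = self.backbone.classifier.in_features
        # Replace the 1000-class ImageNet head with a two-class head.
        self.backbone.classifier = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(in_features, 2),  # classes: authentic vs. deep fake
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)


if __name__ == "__main__":
    model = DeepfakeClassifier()
    # Learning rate is a placeholder for a value the AGSK search would select.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    dummy_batch = torch.randn(4, 3, 224, 224)  # stands in for pre-processed (denoised) images
    logits = model(dummy_batch)
    print(logits.shape)  # torch.Size([4, 2])
```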
ISSN: 1868-5137, 1868-5145
DOI: 10.1007/s12652-024-04829-4