Detecting Video Inter-Frame Forgeries Based on Convolutional Neural Network Model

Bibliographic Details
Published in: International Journal of Image, Graphics and Signal Processing, 2020-06, Vol. 12(3), p. 1-12
Main authors: Hau Nguyen, Xuan; Hu, Yongjian; Ahmad Amin, Muhmmad; Gohar Hayat, Khan; Thinh Le, Van; Truong, Dinh-Tu
Format: Article
Language: English
Online access: Full text
Description
Abstract: In today's era of rapid information spread, videos are easily captured and can go viral in a short time, and editing software has made video tampering easier than ever, so the authenticity of videos has become increasingly important. Inter-frame forgeries are the most common type of video forgery and are difficult to detect with the naked eye. Several algorithms based on handcrafted features have been proposed for detecting inter-frame forgeries, but their accuracy and processing speed remain challenging. In this paper, we propose a method for detecting video inter-frame forgeries based on convolutional neural network (CNN) models, obtained by retraining CNN models pretrained on the ImageNet dataset. The proposed method builds on state-of-the-art CNN models, which are retrained to exploit spatial-temporal relationships in a video so that inter-frame forgeries can be detected robustly, and we also propose a confidence score, computed from the outputs of these networks, to replace the raw output score and increase detection accuracy. In our experiments, the proposed method achieves a detection accuracy of 99.17%, showing that it is significantly more efficient and accurate than other recent methods.
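To make the retraining idea in the abstract concrete, the following is a minimal sketch of how an ImageNet-pretrained CNN could be fine-tuned as a binary authentic-vs-forged classifier and how a confidence score could be derived from its outputs. The choice of ResNet-18, the frame-difference input used to expose temporal information, the softmax-based score, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained CNN for inter-frame
# forgery detection. Input representation and hyperparameters are
# assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


def build_detector(num_classes: int = 2) -> nn.Module:
    # Start from a CNN pretrained on ImageNet and replace the classifier head
    # with a two-class (authentic vs. forged) output layer.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def frame_pair_input(prev_frame: torch.Tensor, curr_frame: torch.Tensor) -> torch.Tensor:
    # One simple way to expose spatial-temporal information to a 2-D CNN:
    # feed the absolute difference of consecutive frames (assumption).
    return (curr_frame - prev_frame).abs()


def confidence_score(logits: torch.Tensor) -> torch.Tensor:
    # Map raw network outputs to a [0, 1] score for the "forged" class via
    # softmax; the paper's actual confidence score may be defined differently.
    return torch.softmax(logits, dim=1)[:, 1]


if __name__ == "__main__":
    model = build_detector()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Dummy batch of frame-difference images (N, 3, 224, 224) with labels.
    x = torch.rand(4, 3, 224, 224)
    y = torch.randint(0, 2, (4,))

    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        print(confidence_score(model(x)))
```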
ISSN: 2074-9074, 2074-9082
DOI: 10.5815/ijigsp.2020.03.01