Spatial Video Forgery Detection and Localization using Texture Analysis of Consecutive Frames

Bibliographic Details
Published in: Advances in Electrical and Computer Engineering, 2019-01, Vol. 19 (3), p. 97-108
Main authors: SADDIQUE, M., ASGHAR, K., BAJWA, U. I., HUSSAIN, M., HABIB, Z.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Nowadays, videos can be easily recorded and forged with user-friendly editing tools, and the forged videos can be shared on social networks to spread false propaganda. During spatial forgery, the texture and micro-patterns of the frames become inconsistent, which can be observed in the difference of two consecutive frames. Based on this observation, a method has been proposed for the detection of forged video segments and the localization of forged frames. Employing the Chrominance value of Consecutive frame Difference (CCD) and the Discriminative Robust Local Binary Pattern (DRLBP), a new descriptor is introduced to model the inconsistency embedded in the frames due to forgery. A Support Vector Machine (SVM) is used to detect whether a pair of consecutive frames is forged. If at least one pair of consecutive frames is detected as forged, the video segment is predicted as forged and the forged frames are localized. Extensive experiments validate the performance of the method on a combined dataset of videos tampered by copy-move and splicing. The detection accuracy on the large dataset is 96.68 percent and the video accuracy is 98.32 percent. The comparison shows that the method outperforms state-of-the-art methods, even under cross-dataset validation.
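
The pipeline described in the abstract (chrominance difference of consecutive frames, texture descriptor on the difference image, SVM decision per frame pair, video-level aggregation) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes OpenCV, scikit-image, and scikit-learn, and it stands in a uniform LBP histogram for the DRLBP descriptor, which is not available in common libraries.

# Illustrative sketch of the described pipeline (assumptions noted above).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC


def ccd(frame_a, frame_b):
    """Chrominance difference of two consecutive BGR frames (CCD step)."""
    ycc_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2YCrCb).astype(np.int16)
    ycc_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2YCrCb).astype(np.int16)
    # Keep only the Cr and Cb channels; the absolute difference exposes
    # texture/micro-pattern inconsistencies introduced by spatial forgery.
    return np.abs(ycc_a[..., 1:] - ycc_b[..., 1:]).astype(np.uint8)


def texture_descriptor(diff, points=8, radius=1):
    """Uniform LBP histogram over the difference image (DRLBP stand-in)."""
    feats = []
    for ch in range(diff.shape[-1]):
        lbp = local_binary_pattern(diff[..., ch], points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)


def frame_pair_features(video_path):
    """Yield one descriptor per pair of consecutive frames of a video."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        yield texture_descriptor(ccd(prev, curr))
        prev = curr
    cap.release()


# Training: X stacks frame-pair descriptors, y holds forged/authentic labels.
#   clf = SVC(kernel="rbf").fit(X, y)
# A video segment is flagged as forged if any of its frame pairs is predicted
# forged, which also localizes the forged frames:
#   pair_preds = clf.predict(np.stack(list(frame_pair_features("clip.mp4"))))
#   video_is_forged = bool(pair_preds.any())

The per-pair decision followed by an "any pair forged" rule mirrors the aggregation described in the abstract; the specific LBP parameters and SVM kernel here are illustrative choices, not values reported by the paper.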
ISSN: 1582-7445, 1844-7600
DOI: 10.4316/AECE.2019.03012