Perceptual Video Hashing with Secure Anti-Noise Model for Social Video Retrieval
Published in: IEEE Internet of Things Journal, 2024-01, Vol. 11 (2), p. 1-1
Main Authors:
Format: Article
Language: English
Online Access: Order full text
Abstract: In real scenarios, videos are often corrupted by multiple types of noise, which makes retrieving social videos challenging. However, most current video hashing methods for video retrieval consider attacks from only a single noise model and rarely address complex noise models, leaving this difficulty unsolved. We therefore describe a novel video hashing scheme with a secure anti-noise model (SANM). To improve robustness against noise attacks, the input video is reconstructed into a SANM by low-rank representation (LRR) and random subspace partition (RSP). LRR is a useful technique for capturing the global structure of data: it recovers the underlying subspace in noisy environments and helps make the proposed model robust to multiple kinds of noise. In addition, using a chaotic map to control the generation of the RSP ensures the security of the proposed model. A new subspace decomposition descriptor (SDD) is then proposed: it is obtained by calculating invariant distances between the factor matrices produced by Tucker decomposition, and it is used to decompose the SANM into a compact hash. Extensive experiments demonstrate that SANM hashing outperforms several state-of-the-art algorithms in terms of robustness and discrimination, and that it can accurately retrieve social videos.
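To make the described pipeline concrete, below is a minimal Python sketch of two of the building blocks the abstract names: a chaotic (logistic-map) keyed random subspace partition, and a Tucker-decomposition descriptor binarized into hash bits. It assumes numpy and tensorly; every function name and parameter (`key`, `mu`, `ranks`), the pairwise-distance choice, and the median binarization are illustrative assumptions, not the paper's actual SANM/SDD construction, and the LRR reconstruction step is omitted.

```python
# Hedged sketch of a chaotic-map-keyed random subspace partition (RSP) and a
# Tucker-based descriptor hash, loosely following the abstract's outline.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def logistic_permutation(n, key=0.37, mu=3.99, burn_in=100):
    """Keyed pseudo-random permutation from a logistic map x -> mu*x*(1-x)."""
    x = key
    for _ in range(burn_in):            # discard transient iterates
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)              # ranking the chaotic sequence yields a permutation

def random_subspace_partition(video, n_groups, key=0.37):
    """Split frames into key-controlled groups (our reading of the secure RSP step)."""
    perm = logistic_permutation(video.shape[0], key)
    return np.array_split(video[perm], n_groups, axis=0)

def descriptor_hash(video_group, ranks=(4, 4, 4)):
    """Tucker-decompose one subspace tensor and derive binary hash bits from
    pairwise column distances of the factor matrices (a stand-in for SDD)."""
    core, factors = tucker(tl.tensor(video_group.astype(float)), rank=list(ranks))
    dists = []
    for U in factors:                   # one factor matrix per tensor mode; core unused here
        U = tl.to_numpy(U)
        for i in range(U.shape[1]):
            for j in range(i + 1, U.shape[1]):
                dists.append(np.linalg.norm(U[:, i] - U[:, j]))
    d = np.array(dists)
    return (d > np.median(d)).astype(np.uint8)   # median binarization -> hash bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((32, 24, 24))    # toy video: 32 frames of 24x24 pixels
    groups = random_subspace_partition(video, n_groups=4, key=0.5)
    hash_bits = np.concatenate([descriptor_hash(g) for g in groups])
    print(hash_bits)
```

Two hashes generated with the same key could then be compared with a normalized Hamming distance for retrieval; without the key, the partition, and hence the hash, is not reproducible, which is the security property the chaotic map is meant to provide.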
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3293609