Deeply‐Recursive Attention Network for video steganography

Bibliographic Details
Published in: CAAI Transactions on Intelligence Technology, 2023-12, Vol. 8 (4), pp. 1507-1523
Authors: Cui, Jiabao; Zheng, Liangli; Yu, Yunlong; Lin, Yining; Ni, Huajian; Xu, Xin; Zhang, Zhongfei
Format: Article
Language: English
Online access: Full text
Description
Abstract: Video steganography plays an important role in secret communication: it conceals a secret video in a cover video by perturbing pixel values in the cover frames. Imperceptibility is the first and foremost requirement of any steganographic approach. Inspired by the fact that human eyes perceive pixel perturbations differently in different video areas, a novel, effective, and efficient Deeply-Recursive Attention Network (DRANet) for video steganography is proposed, which finds areas suitable for information hiding by modelling spatio-temporal attention. The DRANet mainly contains two important components: a Non-Local Self-Attention (NLSA) block and a Non-Local Co-Attention (NLCA) block. Specifically, the NLSA block selects cover-frame areas suitable for hiding by computing inter- and intra-frame correlations among the cover frames. The NLCA block produces enhanced representations of the secret frames to improve the robustness of the model and alleviate the influence of different areas of the secret video. Furthermore, the DRANet reduces the number of model parameters by recursively applying similar operations to the different frames of an input video. Experimental results show that the proposed DRANet achieves better performance with fewer parameters than state-of-the-art competitors.
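To illustrate the non-local attention idea behind the NLSA block, below is a minimal sketch of an embedded-Gaussian non-local self-attention layer in PyTorch. This is not the authors' implementation: the class name, channel sizes, and the single-frame (intra-frame only) formulation are assumptions for illustration; DRANet additionally models inter-frame correlations, co-attention with the secret frames, and recursive weight sharing, which are omitted here.

# Minimal non-local self-attention sketch (illustrative, not the DRANet code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalSelfAttention(nn.Module):
    """Computes pairwise correlations between all spatial positions of a feature
    map and uses them to reweight the features (embedded-Gaussian non-local block)."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inner, channels, kernel_size=1)    # restore channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, h*w, inner)
        k = self.phi(x).flatten(2)                     # (b, inner, h*w)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, h*w, inner)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (b, h*w, h*w) position affinities
        y = torch.bmm(attn, v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection


if __name__ == "__main__":
    # Toy usage: one 64-channel feature map of a 32x32 cover frame.
    block = NonLocalSelfAttention(channels=64)
    frame_features = torch.randn(1, 64, 32, 32)
    print(block(frame_features).shape)  # torch.Size([1, 64, 32, 32])

In the paper's setting, attention weights of this kind would indicate which regions can tolerate pixel perturbation for hiding; the sketch above simply returns attention-reweighted features with a residual connection.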
ISSN: 2468-2322
DOI: 10.1049/cit2.12191