ARWGAN: Attention-guided Robust Image Watermarking Model Based on GAN


Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement 2023-01, Vol. 72, p. 1-1
Main Authors: Huang, Jiangtao; Luo, Ting; Li, Li; Yang, Gaobo; Xu, Haiyong; Chang, Chin-Chen
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: In existing deep-learning-based watermarking models, the image features extracted for fusion with the watermark are not abundant enough and, more critically, the essential features are not highlighted during learning for the purpose of robust watermarking; both limitations constrain watermarking performance. To address these two drawbacks, this paper proposes an attention-guided robust image watermarking model based on a generative adversarial network (ARWGAN). To acquire a great deal of representational image features, a feature fusion module (FFM) is devised to learn shallow and deep features effectively for multi-layer fusion with the watermark; meanwhile, reuse of those features through dense connections enhances robustness. To alleviate the image distortion caused by watermark embedding, an attention module (AM) is deployed to compute an attention mask by mining the global features of the original image. Specifically, guided by the attention mask, image features representing inconspicuous regions and textured regions are enhanced so that the watermark can be embedded at high strength, while other features are simultaneously suppressed to improve watermarking performance. Furthermore, a noise sub-network is adopted to enhance robustness by simulating various image attacks during iterative training. A discriminator distinguishes the encoded image from the original image to continuously improve watermarking invisibility. Experimental results demonstrate that ARWGAN is superior to existing state-of-the-art watermarking models, and ablation experiments prove the effectiveness of the FFM and the AM. The code is available at https://github.com/river-huang/ARWGAN.
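As a rough illustration of the attention-guided embedding idea described in the abstract (this is not the authors' implementation — their official code is at the GitHub link above), the sketch below uses NumPy and a hypothetical gradient-based attention mask as a stand-in for the learned AM: watermark residuals are amplified in textured, inconspicuous regions and suppressed in flat regions.

```python
import numpy as np

def attention_mask(image: np.ndarray) -> np.ndarray:
    """Hypothetical attention mask: local gradient magnitude, normalized to [0, 1].
    Texture-rich regions get values near 1; flat regions get values near 0.
    (In ARWGAN this mask is learned from global image features, not hand-crafted.)"""
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def embed(image: np.ndarray, watermark_residual: np.ndarray,
          strength: float = 2.0) -> np.ndarray:
    """Embed a watermark residual, scaled by the attention mask so distortion
    concentrates where it is least visible."""
    mask = attention_mask(image)
    return image + strength * mask * watermark_residual

# Toy usage: a random cover image and a random watermark residual.
rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(64, 64))
residual = rng.standard_normal((64, 64))
stego = embed(cover, residual)
print(np.abs(stego - cover).mean())  # average per-pixel embedding distortion
```

A completely flat image yields a zero mask, so no distortion is introduced there — the same qualitative behavior the attention mask is meant to produce in smooth regions.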
ISSN: 0018-9456
ISSN: 1557-9662
DOI: 10.1109/TIM.2023.3285981