Extending user control for image stylization using hierarchical style transfer networks
Saved in:
Published in: Heliyon 2024-03, Vol. 10 (5), p. e27012, Article e27012
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: Neural style transfer refers to re-rendering a content image while fusing into it the features of a style image. Recent studies focus on either multiple style transfer or arbitrary style transfer, using perceptual and fixpoint content losses in their respective network architectures. These losses yield notable stylization results but give the user no control over the degree of style, and the stylized outputs compromise the preservation of detail from the content image. This work proposes the hierarchical style transfer network (HSTN) for image stylization, which lets the user control the degree of applied style via a denoising parameter. The HSTN incorporates the proposed fixpoint control loss, which preserves details from the content image, together with a denoising convolutional neural network (DnCNN) and a denoising loss that allow the user to control the level of stylization. An encoder-decoder block, a DnCNN block, and a loss network block form the basic building blocks of the HSTN. Extensive experiments have been carried out and compared against existing works to demonstrate the effectiveness of the HSTN. A subjective user evaluation shows that the HSTN produces the best fusion of style and unique stylization results while preserving content-image details, scoring 12% higher than the second-best performing method. The proposed work is also among the studies achieving the best trade-off between content and style classification scores, i.e. 37.64% and 60.27%, respectively.
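The abstract names the three building blocks (encoder-decoder, DnCNN, loss network) and a user-facing denoising parameter, but not their exact definitions. The following PyTorch sketch is a hypothetical illustration of how such a pipeline could compose; the layer shapes, the DnCNN depth, the linear blending rule for the denoising parameter, and the fixpoint penalty are all assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderDecoder(nn.Module):
    # Minimal stand-in for HSTN's encoder-decoder stylization block.
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decode(self.encode(x))


class DnCNN(nn.Module):
    # Shallow residual denoiser in the spirit of DnCNN; depth/width assumed.
    def __init__(self, depth=5, width=64):
        super().__init__()
        layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(width, 3, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)  # subtract the predicted noise residual


def stylize(content, styliser, denoiser, alpha):
    """Blend raw and denoised stylizations with a user parameter alpha in [0, 1].

    alpha = 0 keeps the full stylization; alpha = 1 keeps the denoised,
    more content-faithful output. This linear blend is an assumed reading
    of how a denoising parameter could control the stylization level.
    """
    stylized = styliser(content)
    return (1.0 - alpha) * stylized + alpha * denoiser(stylized)


def fixpoint_penalty(styliser, stylized):
    # Stand-in for the paper's fixpoint control loss: restylizing an
    # already-stylized image should change it little.
    return F.mse_loss(styliser(stylized), stylized)


if __name__ == "__main__":
    styliser, denoiser = EncoderDecoder(), DnCNN()
    content = torch.rand(1, 3, 64, 64)  # dummy content batch
    out = stylize(content, styliser, denoiser, alpha=0.5)
    print(out.shape, float(fixpoint_penalty(styliser, out)))
```

The linear blend is simply the most direct way to expose a single scalar control at inference; the paper's denoising loss appears to act during training, which this sketch does not attempt to reproduce.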
ISSN: 2405-8440
DOI: 10.1016/j.heliyon.2024.e27012