Harmonized Portrait‐Background Image Composition

Bibliographic Details
Published in: Computer Graphics Forum, 2023-09, Vol. 42 (6), p. n/a
Authors: Wang, Yijiang; Li, Yuqi; Wang, Chong; Ye, Xulun
Format: Article
Language: English
Online access: Full text
Description
Abstract: Portrait‐background image composition is a widely used operation in selfie editing, video meetings, and other portrait applications. To guarantee the realism of composited images, the appearance of the foreground portrait must be adjusted to fit the new background image. Existing image harmonization approaches are designed for general foreground objects and therefore lack the ability to specifically adjust portrait foregrounds. In this paper, we present a novel end‐to‐end network architecture that learns both content features and style features for portrait‐background composition. The method adjusts the appearance of portraits to make them compatible with their backgrounds, while the generation of the composited images satisfies the prior of a style‐based generator. We also propose a pipeline to generate high‐quality, high‐variety synthesized image datasets for training and evaluation. The proposed method outperforms other state‐of‐the‐art methods on both the synthesized dataset and real composited images, and shows robust performance in video applications.
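
The abstract describes the architecture only at a high level. Purely as an illustration, and not the authors' implementation, the PyTorch sketch below shows one way a content encoder and a background style encoder could drive a style-modulated decoder; every class name, layer size, and the toy decoder standing in for the style‐based generator prior are assumptions.

import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Extracts spatial content features from the composited image."""
    def __init__(self, in_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps the background image to a global style code."""
    def __init__(self, in_ch=3, style_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class PortraitHarmonizer(nn.Module):
    """Modulates content features with the background style code and decodes them."""
    def __init__(self, style_dim=512):
        super().__init__()
        self.content_enc = ContentEncoder()
        self.style_enc = StyleEncoder(style_dim=style_dim)
        self.modulate = nn.Linear(style_dim, 128)
        # Toy decoder; in the paper's setting this role would be played by
        # a pretrained style-based generator acting as an image prior.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, composite, background):
        content = self.content_enc(composite)               # (B, 128, H/4, W/4)
        style = self.modulate(self.style_enc(background))   # (B, 128)
        # Channel-wise modulation, loosely in the spirit of the style
        # modulation used by style-based generators.
        return self.decoder(content * style[:, :, None, None])

if __name__ == "__main__":
    composite = torch.rand(1, 3, 256, 256)   # portrait pasted onto a new background
    background = torch.rand(1, 3, 256, 256)  # the new background image
    out = PortraitHarmonizer()(composite, background)
    print(out.shape)  # torch.Size([1, 3, 256, 256])

In the described setting, the toy decoder would be replaced by a pretrained style‐based generator (e.g. a StyleGAN-like model) whose latent prior constrains the appearance of the generated composite.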
ISSN: 0167-7055, 1467-8659
DOI: 10.1111/cgf.14921