Subpixel Deblurring of Anti‐Aliased Raster Clip‐Art


Bibliographic Details
Published in: Computer Graphics Forum 2023-05, Vol. 42 (2), p. 61-76
Main Authors: Yang, J., Vining, N., Kheradmand, S., Carr, N., Sigal, L., Sheffer, A.
Format: Article
Language: English
Online access: Full text
Description
Summary: Artist-generated clip-art images typically consist of a small number of distinct, uniformly colored regions with clear boundaries. Legacy artist-created images are often stored in low-resolution (100×100 px or less) anti-aliased raster form. Compared to anti-aliasing-free rasterization, anti-aliasing blurs inter-region boundaries and obscures the artist's intended region topology and color palette; at the same time, it better preserves subpixel details. Recovering the underlying artist-intended images from their low-resolution anti-aliased rasterizations can facilitate resolution-independent rendering, lossless vectorization, and other image-processing applications. Unfortunately, while human observers can mentally deblur these low-resolution images and reconstruct region topology, color, and subpixel details, existing algorithms applicable to this task fail to produce outputs consistent with human expectations when presented with such images. We recover these viewer-perceived blur-free images at subpixel resolution, producing outputs where each input pixel is replaced by four corresponding (sub)pixels. Performing this task requires computing the size of the output image's color palette, generating the palette itself, and associating each pixel in the output with one of the colors in the palette. We obtain these desired output components by leveraging a combination of perceptual and domain priors, and real-world data. We use readily available data to train a network that predicts, for each anti-aliased image, a low-blur approximation of the blur-free double-resolution outputs we seek. The images obtained at this stage are perceptually closer to the desired outputs but typically still have hundreds of redundant, differently colored regions with fuzzy boundaries.
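The abstract above describes outputs in which every input pixel is replaced by a 2×2 block of (sub)pixels. As a shape-level sketch only (the paper predicts the actual subpixel colors with a trained network; the nearest-neighbor replication here is just a placeholder), the input-to-output grid relationship can be illustrated with NumPy:

```python
import numpy as np

# A tiny 3x3 single-channel stand-in for a low-resolution input image.
img = np.arange(9, dtype=float).reshape(3, 3)

# Each input pixel maps to a 2x2 block of (sub)pixels in the output,
# so a HxW input yields a 2Hx2W output. Replicating values is only a
# placeholder for the network's predicted subpixel colors.
sub = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

print(sub.shape)  # (6, 6)
```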
We convert these low-blur intermediate images into blur-free outputs consistent with viewer expectations using a discrete partitioning procedure guided by the characteristic properties of clip-art images, observations about the anti-aliasing process, and human perception of anti-aliased clip-art. This step dramatically reduces the size of the output color palettes and the region counts, bringing them in line with viewer expectations and enabling the image-processing applications we target. We demonstrate the utility of our method by using our outputs for a number of image-processing tasks, and validate it via extensive comparisons to prior art. In our comparative study, participants preferred our deblurred outputs.
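The partitioning procedure above is the paper's own; as a loose, hypothetical analogue of the palette-reduction idea only (not the authors' method), a plain k-means color quantization shows how a low-blur image with leftover blended boundary colors can be snapped to a small, flat palette:

```python
import numpy as np

def quantize_palette(pixels: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Cluster (N, 3) RGB pixels in [0, 1] into k palette colors via plain k-means.

    Returns (palette of shape (k, 3), per-pixel labels of shape (N,)).
    """
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), size=k, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest palette color.
        dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for c in range(k):
            if (labels == c).any():
                palette[c] = pixels[labels == c].mean(axis=0)
    return palette, labels

# Toy "low-blur" image: two flat regions plus a softly blended boundary column
# (the kind of residue anti-aliasing leaves behind).
img = np.zeros((4, 4, 3))
img[:, :2] = [1.0, 0.0, 0.0]   # red region
img[:, 2:] = [0.0, 0.0, 1.0]   # blue region
img[:, 2] = [0.5, 0.0, 0.5]    # blended boundary colors

palette, labels = quantize_palette(img.reshape(-1, 3), k=2)
flat = palette[labels].reshape(img.shape)  # every pixel snaps to a palette color
```

Note that unlike this sketch, the paper's discrete partitioning also determines the palette size itself rather than taking k as given.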
ISSN: 0167-7055, 1467-8659
DOI: 10.1111/cgf.14744