DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Current perceptual similarity metrics operate at the level of pixels and
patches. These metrics compare images in terms of their low-level colors and
textures, but fail to capture mid-level similarities and differences in image
layout, object pose, and semantic content. In this paper, we develop a
perceptual metric that assesses images holistically. Our first step is to
collect a new dataset of human similarity judgments over image pairs that are
alike in diverse ways. Critical to this dataset is that judgments are nearly
automatic and shared by all observers. To achieve this we use recent
text-to-image models to create synthetic pairs that are perturbed along various
dimensions. We observe that popular perceptual metrics fall short of explaining
our new data, and we introduce a new metric, DreamSim, tuned to better align
with human perception. We analyze how our metric is affected by different
visual attributes, and find that it focuses heavily on foreground objects and
semantic content while also being sensitive to color and layout. Notably,
despite being trained on synthetic data, our metric generalizes to real images,
giving strong results on retrieval and reconstruction tasks. Furthermore, our
metric outperforms both prior learned metrics and recent large vision models on
these tasks.
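The abstract describes DreamSim as a metric that compares whole images rather than individual pixels or patches. Metrics of this family typically score an image pair by the cosine distance between deep features from a pretrained vision backbone. The sketch below illustrates only that general recipe: it uses a torchvision ResNet-50 as a stand-in backbone rather than the authors' released DreamSim model, and the image paths are placeholders.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Stand-in backbone: penultimate ResNet-50 features serve as the image embedding.
# DreamSim itself is built on fine-tuned ViT backbones; this is only illustrative.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def perceptual_distance(path_a: str, path_b: str) -> float:
    """Cosine distance between image embeddings; lower means more similar."""
    a = preprocess(Image.open(path_a).convert("RGB")).unsqueeze(0)
    b = preprocess(Image.open(path_b).convert("RGB")).unsqueeze(0)
    return float(1.0 - F.cosine_similarity(backbone(a), backbone(b)).item())

# Placeholder file names; lower scores indicate pairs the metric judges more alike.
print(perceptual_distance("image_a.png", "image_b.png"))
```

Learned metrics such as DreamSim additionally tune the backbone on the human similarity judgments described above, so that this feature-space distance better tracks human perception of layout, pose, and semantic content.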
DOI: 10.48550/arxiv.2306.09344