SimNP: Learning Self-Similarity Priors Between Neural Points
| Main Authors | , , , |
|---|---|
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
Abstract: Existing neural field representations for 3D object reconstruction either (1) utilize object-level representations, but suffer from low-quality details due to conditioning on a global latent code, or (2) are able to perfectly reconstruct the observations, but fail to utilize object-level prior knowledge to infer unobserved regions. We present SimNP, a method to learn category-level self-similarities, which combines the advantages of both worlds by connecting neural point radiance fields with a category-level self-similarity representation. Our contribution is two-fold. (1) We design the first neural point representation on a category level by utilizing the concept of coherent point clouds. The resulting neural point radiance fields store a high level of detail for locally supported object regions. (2) We learn how information is shared between neural points in an unconstrained and unsupervised fashion, which allows us to derive unobserved regions of an object from given observations during the reconstruction process. We show that SimNP outperforms previous methods in reconstructing symmetric unseen object regions, surpassing approaches built upon category-level or pixel-aligned radiance fields, while providing semantic correspondences between instances.
DOI: 10.48550/arxiv.2309.03809