SimNP: Learning Self-Similarity Priors between Neural Points

Christopher Wewer1, Eddy Ilg2, Bernt Schiele1, Jan Eric Lenssen1
1 Max Planck Institute for Informatics, Saarland Informatics Campus, Germany
2 Saarland University, Germany
ICCV 2023
Figure 1. SimNP: Learning Self-Similarity Priors between Neural Points. a) SimNP is a renderable neural point radiance field that learns category-level self-similarities from data by connecting neural points to embeddings via optimized bipartite attention scores. b) The learned self-similarities can be used to transfer details from single- or few-view observations to unobserved, similar, and symmetric parts of objects.

Abstract

Existing neural field representations for 3D object reconstruction either (1) use object-level representations, which suffer from low-quality details due to conditioning on a global latent code, or (2) can reconstruct the observations perfectly, but fail to exploit object-level prior knowledge to infer unobserved regions. We present SimNP, a method to learn category-level self-similarities, which combines the advantages of both by connecting neural point radiance fields with a category-level self-similarity representation. Our contribution is two-fold. (1) We design the first category-level neural point representation by utilizing the concept of coherent point clouds. The resulting neural point radiance fields store a high level of detail for locally supported object regions. (2) We learn how information is shared between neural points in an unconstrained and unsupervised fashion, which allows unobserved regions of an object to be derived from the given observations during reconstruction. We show that SimNP outperforms previous methods in reconstructing symmetric unseen object regions, surpassing methods built upon category-level or pixel-aligned radiance fields, while providing semantic correspondences between instances.
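To make the mechanism behind contribution (2) concrete, the following is a minimal NumPy sketch of the bipartite attention idea named in the figure caption: each neural point gathers its feature as an attention-weighted mixture of instance embeddings, with the score matrix shared across a category. All sizes, names, and the plain softmax aggregation are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not taken from the paper).
num_points, num_embeddings, dim = 6, 4, 8

# Instance-specific embeddings (in the paper these encode one object).
embeddings = rng.normal(size=(num_embeddings, dim))

# Category-level bipartite attention scores: one learnable score per
# (neural point, embedding) pair, shared across all instances.
scores = rng.normal(size=(num_points, num_embeddings))

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Each neural point's feature is an attention-weighted mix of embeddings.
# Points with similar score rows receive similar features, which is how
# self-similar (e.g. symmetric) object parts can share observed detail.
weights = softmax(scores, axis=1)        # (num_points, num_embeddings)
point_features = weights @ embeddings    # (num_points, dim)

assert point_features.shape == (num_points, dim)
assert np.allclose(weights.sum(axis=1), 1.0)
```

In this reading, reconstructing an unobserved region amounts to optimizing only the embeddings from the visible views, while the fixed category-level scores route that information to all points, including those covering unseen, symmetric parts.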