MEt3R: Measuring Multi-View Consistency in Generated Images

1Max Planck Institute for Informatics, Saarland Informatics Campus, Germany, 2ETH Zurich

  • Pose-Free Multi-View Consistency Metric
  • Evaluate consistency of generated novel views
  • Evaluate consistency of generated videos
  • Robust to varying image resolutions
  • Plug-n-Play: easy to use (see the usage sketch below)
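As a plug-and-play metric, MEt3R can be scored directly on batches of image pairs. The sketch below is illustrative only: the MEt3R class name, the img_size argument, and the call signature are assumptions, so consult the released package for the exact API.

    import torch
    from met3r import MEt3R  # assumed import path; see the released package

    # Illustrative initialization; `img_size` is an assumed argument name.
    metric = MEt3R(img_size=256).cuda()

    # A batch of image pairs in [-1, 1] with shape (batch, 2, 3, H, W);
    # the two views of each pair are compared against each other.
    images = torch.rand(4, 2, 3, 256, 256, device="cuda") * 2 - 1

    score = metric(images)  # hypothetical call; lower scores indicate more consistent pairs
    print(score.mean().item())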

Abstract

We introduce MEt3R, a metric for multi-view consistency in generated images. Large-scale generative models for multi-view image generation are rapidly advancing the field of 3D inference from sparse observations. However, due to the nature of generative modeling, traditional reconstruction metrics are not suitable to measure the quality of generated outputs and metrics that are independent of the sampling procedure are desperately needed. In this work, we specifically address the aspect of consistency between generated multi-view images, which can be evaluated independently of the specific scene. Our approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a feed-forward manner, which are used to warp image contents from one view into the other. Then, feature maps of these images are compared to obtain a similarity score that is invariant to view-dependent effects. Using MEt3R, we evaluate the consistency of a large set of previous methods for novel view and video generation, including our open, multi-view latent diffusion model.

Method


Method overview. Our metric evaluates the consistency between images I1 and I2. Given such a pair, we apply DUSt3R to obtain dense 3D point maps X1 and X2. The point maps are used to project upscaled DINO features F1, F2 into the coordinate frame of I1 via unprojecting and rendering. We compare the resulting feature maps F̂1 and F̂2 in pixel space to obtain the similarity S(I1, I2).
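To make the pipeline concrete, here is a simplified PyTorch sketch of the warping and comparison steps. It assumes point maps that DUSt3R has already expressed in the camera frame of I1 and a known focal length, and it replaces the differentiable point renderer of the actual method with a crude nearest-pixel z-buffer splat, so it is an approximation rather than the released implementation.

    import torch
    import torch.nn.functional as F

    def render_features(points, feats, focal, H, W):
        # points: (3, H, W) 3D point map in the reference camera frame (from DUSt3R)
        # feats:  (C, H, W) upscaled DINO features of the same image
        C = feats.shape[0]
        X, Y, Z = points.reshape(3, -1)
        u = (focal * X / Z + W / 2).round().long()
        v = (focal * Y / Z + H / 2).round().long()
        valid = (Z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        idx = (v * W + u)[valid]
        # Crude z-buffer: write far points first so nearer ones overwrite them.
        # (Write order for duplicate pixels is backend-dependent; a real
        # implementation should use a proper point renderer.)
        order = torch.argsort(Z[valid], descending=True)
        out = feats.new_zeros(C, H * W)
        mask = torch.zeros(H * W, dtype=torch.bool, device=feats.device)
        out[:, idx[order]] = feats.reshape(C, -1)[:, valid][:, order]
        mask[idx[order]] = True
        return out.reshape(C, H, W), mask.reshape(H, W)

    def similarity(f1_hat, f2_hat, mask):
        # Mean cosine similarity over pixels covered by both projections.
        sim = F.cosine_similarity(f1_hat, f2_hat, dim=0)
        return sim[mask].mean()

Both feature maps are rendered into the frame of I1 (F̂1 from X1 and F̂2 from X2), and the similarity is averaged over the intersection of the two validity masks.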

Results on RealEstate10K

Additional visualizations of MEt3R on consecutive image pairs, progressing through the generated image and video sequences. We consider two sets of baselines: GenWarp, PhotoNVS, MV-LDM, and DFM for multi-view generation, and I2VGen-XL, Ruyi-Mini-7B, and SVD for video generation.
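A minimal sketch of this consecutive-pair evaluation, assuming a metric callable like the one sketched above that accepts batches of shape (batch, 2, 3, H, W):

    import torch

    def sequence_scores(metric, frames):
        # frames: (T, 3, H, W) generated sequence in [-1, 1]
        # returns (T-1,) scores for consecutive pairs (t, t+1)
        pairs = torch.stack([frames[:-1], frames[1:]], dim=1)  # (T-1, 2, 3, H, W)
        return metric(pairs)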

Evaluating Multi-View Generation Models



Comparison with Existing Metrics

Unlike SED and TSED, MEt3R captures variations in 3D consistency without requiring camera poses. FVD, by contrast, cannot be evaluated on a per-image-pair basis, since it relies on a collection of frames and, like FID, is sensitive to sample size.
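To see why FVD (like FID) is sensitive to sample size: both metrics fit a Gaussian to each feature collection and compare the fits with the Fréchet distance, and the mean and covariance estimates become noisy for small collections. A minimal NumPy/SciPy sketch of that computation:

    import numpy as np
    from scipy import linalg

    def frechet_distance(x, y):
        # x, y: (N, D) feature collections; fits a Gaussian to each and
        # compares the fits. Estimates of mu and Sigma degrade for small N,
        # which is why FVD/FID scores depend on the sample size.
        mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
        cov_x = np.cov(x, rowvar=False)
        cov_y = np.cov(y, rowvar=False)
        covmean = linalg.sqrtm(cov_x @ cov_y).real
        return float(((mu_x - mu_y) ** 2).sum() + np.trace(cov_x + cov_y - 2 * covmean))

MEt3R, in contrast, is computed per image pair, so scores can be inspected frame by frame along a sequence.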



Evaluating Video Generation Models



MEt3R Map Visualizations

We visualize MEt3R on individual examples for GenWarp, PhotoNVS, MV-LDM (ours), and DFM. Each error map shows the consistency of the corresponding left and right views, with the left view used as the reference for re-projection. Regions with larger inconsistencies receive higher MEt3R values, indicated by the intensity of the score map.
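A sketch of how such per-pixel score maps could be derived from the warped feature maps in the method sketch above; the exact normalization in the released code may differ.

    import torch
    import torch.nn.functional as F

    def score_map(f1_hat, f2_hat, mask):
        # f1_hat, f2_hat: (C, H, W) feature maps rendered into the left (reference) view
        # returns an (H, W) map where higher values mark larger inconsistencies
        sim = F.cosine_similarity(f1_hat, f2_hat, dim=0)  # per-pixel, in [-1, 1]
        return (1 - sim) / 2 * mask  # rescaled to [0, 1] and masked to valid overlap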




Acknowledgments

This project was partially funded by the Saarland/Intel Joint Program on the Future of Graphics and Media. Thomas Wimmer is supported through the Max Planck ETH Center for Learning Systems.

BibTeX

@misc{asim24met3r,
    title = {MEt3R: Measuring Multi-View Consistency in Generated Images},
    author = {Asim, Mohammad and Wewer, Christopher and Wimmer, Thomas and Schiele, Bernt and Lenssen, Jan Eric},
    howpublished = {arXiv},
    year = {2024},
}