NeuralWarp: Improving neural implicit surfaces geometry with patch warping

CVPR 2022 (Poster)

François Darmon1,2   Bénédicte Bascle1   Jean-Clément Devaux1   Pascal Monasse2   Mathieu Aubry2

1 Thales LAS France  |  2 LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, France
[Teaser figure]

Paper |  Code

Abstract


Neural implicit surfaces have become an important technique for multi-view 3D reconstruction, but their accuracy remains limited. In this paper, we argue that this comes from the difficulty of learning and rendering high-frequency textures with neural networks. We thus propose to add to the standard neural rendering optimization a direct photo-consistency term across the different views. Intuitively, we optimize the implicit geometry so that it warps views onto each other in a consistent way. We demonstrate that two elements are key to the success of such an approach: (i) warping entire patches, using the predicted occupancy and normals of the 3D points along each ray, and measuring their similarity with a robust structural similarity (SSIM); (ii) handling visibility and occlusion in such a way that incorrect warps are not given too much importance while encouraging a reconstruction as complete as possible. We evaluate our approach, dubbed NeuralWarp, on the standard DTU and EPFL benchmarks and show that it outperforms state-of-the-art unsupervised implicit surface reconstruction by over 20% on both datasets.
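
The photo-consistency term can be illustrated with a short sketch. The snippet below is a minimal NumPy illustration, not the paper's implementation: it warps a reference patch into a source view through the homography induced by the local tangent plane of a surface point (its position and normal, as predicted by the implicit geometry) and scores the pair with a simplified single-window SSIM. All function names, camera conventions and the reduced SSIM are assumptions made for illustration.

import numpy as np

def relative_pose(R_ref, t_ref, R_src, t_src):
    # World-to-camera poses (x_cam = R x_world + t) -> pose mapping
    # reference-camera coordinates to source-camera coordinates.
    R = R_src @ R_ref.T
    t = t_src - R @ t_ref
    return R, t

def tangent_plane_homography(K_ref, K_src, R, t, X_ref, n_ref):
    # Homography (reference pixels -> source pixels) induced by the plane
    # through X_ref with normal n_ref, both expressed in the reference
    # camera frame, i.e. the plane n_ref . x = d.
    d = float(n_ref @ X_ref)
    return K_src @ (R + np.outer(t, n_ref) / d) @ np.linalg.inv(K_ref)

def warp_patch(img_src, H, uv_ref):
    # Map reference pixel coordinates uv_ref (N, 2) through H and bilinearly
    # sample the grayscale source image at the warped locations.
    p = (H @ np.hstack([uv_ref, np.ones((len(uv_ref), 1))]).T).T
    uv = p[:, :2] / p[:, 2:3]
    h, w = img_src.shape
    u, v = uv[:, 0], uv[:, 1]
    u0 = np.clip(np.floor(u).astype(int), 0, w - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, h - 2)
    du, dv = u - u0, v - v0
    return (img_src[v0, u0] * (1 - du) * (1 - dv)
            + img_src[v0, u0 + 1] * du * (1 - dv)
            + img_src[v0 + 1, u0] * (1 - du) * dv
            + img_src[v0 + 1, u0 + 1] * du * dv)

def ssim(p, q, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified single-window SSIM between two flattened patches with values
    # in [0, 1]; higher means more photo-consistent.
    mp, mq = p.mean(), q.mean()
    cov = ((p - mp) * (q - mq)).mean()
    return ((2 * mp * mq + c1) * (2 * cov + c2)) / \
           ((mp ** 2 + mq ** 2 + c1) * (p.var() + q.var() + c2))

# Example use on one surface point X (world frame) with normal n:
#   R, t = relative_pose(R_ref, t_ref, R_src, t_src)
#   X_ref, n_ref = R_ref @ X + t_ref, R_ref @ n
#   H = tangent_plane_homography(K_ref, K_src, R, t, X_ref, n_ref)
#   score = ssim(ref_patch_values, warp_patch(img_src, H, uv_ref_patch))

In the full method, as stated above, such scores are combined along each ray using the predicted occupancy, and visibility and occlusion are handled so that incorrect warps are not given too much weight in the loss.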

Visual results



Bibtex

          @inproceedings{darmon2022neuralwarp,
            author    = {Darmon, Fran{\c{c}}ois and
                         Bascle, B{\'{e}}n{\'{e}}dicte and
                         Devaux, Jean{-}Cl{\'{e}}ment and
                         Monasse, Pascal and
                         Aubry, Mathieu},
            title     = {Improving neural implicit surfaces geometry with patch warping},
            booktitle = {CVPR},
            year      = {2022},
          }

Acknowledgements

This work was supported in part by ANR project EnHerit ANR-17-CE23-0008 and was performed using HPC resources from GENCI–IDRIS (grant 2021-AD011011756R1). We thank Tom Monnier and Bruno Lecouat for valuable feedback, and Jingyang Zhang for sending MVSDF results.