We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks, which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, which parameterizes the surface, and (ii) a learnt global feature vector, which parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be refined by an additional step that optimizes the shape feature to minimize the Chamfer distance between the input and the transformed template. We demonstrate that our simple approach improves on state-of-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations and generalizes to non-human shapes. This robustness allows it to perform well on real, unclean meshes from the SCAPE dataset.
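The refinement step mentioned above minimizes a symmetric Chamfer distance between points sampled on the input shape and on the deformed template. Below is a minimal PyTorch sketch of such a loss; the function name and tensor shapes are illustrative and do not reproduce the exact implementation used in this repository.

```python
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point clouds.

    a: (N, 3) points sampled on the deformed template.
    b: (M, 3) points sampled on the input shape.
    Returns the sum of the two directed mean nearest-neighbour distances.
    """
    # Pairwise squared distances, shape (N, M).
    diff = a.unsqueeze(1) - b.unsqueeze(0)
    dist = (diff ** 2).sum(dim=-1)
    # For each point, squared distance to its nearest neighbour in the other cloud.
    a_to_b = dist.min(dim=1).values.mean()
    b_to_a = dist.min(dim=0).values.mean()
    return a_to_b + b_to_a
```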
If you find this work useful in your research, please consider citing:
@inproceedings{groueix2018b,
title = {3D-CODED : 3D Correspondences by Deep Deformation},
author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
booktitle = {ECCV},
year = {2018}
}
Our approach has three main steps. First, from the input shape (a), a feed-forward pass through our encoder network generates an initial global shape descriptor (b). Second, we use gradient descent through our decoder network to refine this shape descriptor and improve the reconstruction quality (c). We can then use the template to put points of any two input shapes in correspondence via their final reconstructions (d).
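The sketch below illustrates steps (b)-(d) under simplifying assumptions: `encoder` and `decoder` are placeholders for the trained networks, the decoder maps a latent code to the deformed template vertices, and `chamfer_distance` is the hypothetical helper sketched above. It is meant to convey the idea, not mirror the repository code.

```python
import torch

def refine_latent(encoder, decoder, points, steps=100, lr=5e-4):
    """Steps (b)-(c): encode the input shape, then refine the latent code
    by gradient descent on the Chamfer distance to the input points."""
    with torch.no_grad():
        z = encoder(points)                  # initial global shape descriptor (b)
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = decoder(z)                   # deformed template, shape (N, 3)
        loss = chamfer_distance(recon, points)
        loss.backward()
        opt.step()                           # refined descriptor (c)
    return z.detach()

def correspondences(decoder, z_a, points_b, z_b):
    """Step (d), simplified: the k-th template vertex lands at recon_a[k]
    on shape A; match it to the nearest input point of shape B through
    the reconstruction of B."""
    recon_a = decoder(z_a)                   # template vertices deformed onto A
    recon_b = decoder(z_b)                   # same vertices deformed onto B
    dist = ((recon_b.unsqueeze(1) - points_b.unsqueeze(0)) ** 2).sum(-1)
    idx = dist.argmin(dim=1)                 # nearest real point of B per vertex
    return recon_a, points_b[idx]            # paired points across A and B
```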
Our method works on real scans with noise and potentially large holes, as demonstrated on the SCAPE dataset. It is robust to many types of perturbations, namely changes in sampling rate, holes, topology, isometry, scaling, and added noise.
Our method works on any kind of deformable shape, as long as enough data is available for training.
We provide source code for the project at http://github.com/ThibaultGROUEIX/3D-CODED.
We trained on synthetic humans generated with SMPL and synthetic animals generated with SMAL. More details are in the paper and in the source code.
We used SCAPE, TOSCA, and FAUST for testing.