Learning to Guide Local Feature Matches

3DV 2020

François Darmon1,2   Mathieu Aubry2   Pascal Monasse2

1 Thales LAS France  |  2 LIGM, École des Ponts, Univ Gustave Eiffel, CNRS, France

Paper | Short video | Long video | Slides


We tackle the problem of finding accurate and robust keypoint correspondences between images. We propose a learning-based approach that guides local feature matching with a learned approximate image matching. Our approach can boost the results of SIFT to a level similar to state-of-the-art deep descriptors such as SuperPoint, ContextDesc, or D2-Net, and can also improve the performance of these descriptors. We introduce and study different levels of supervision for learning coarse correspondences. In particular, we show that weak supervision from epipolar geometry leads to higher performance than the stronger but more biased point-level supervision, and is a clear improvement over weak image-level supervision. We demonstrate the benefits of our approach in a variety of conditions by evaluating our guided keypoint correspondences for localization of internet images on the YFCC100M dataset and of indoor images on the SUN3D dataset, for robust localization on the Aachen day-night benchmark, and for 3D reconstruction in challenging conditions using the LTLL historical image data.
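To illustrate the two ideas in the abstract, here is a minimal NumPy sketch (not the paper's implementation): `guided_match` restricts nearest-neighbor descriptor matching to candidates near a coarse predicted location, where `coarse_map` is a hypothetical stand-in for the learned approximate image matching; `epipolar_distance` computes the standard symmetric epipolar distance that weak epipolar supervision is based on.

```python
import numpy as np

def guided_match(desc1, kpts1, desc2, kpts2, coarse_map, radius=32.0):
    """Nearest-neighbor descriptor matching, restricted to candidates in
    image 2 lying near the coarse correspondence of each image-1 keypoint.
    `coarse_map` stands in for the learned approximate matching."""
    matches = []
    for i, (d1, p1) in enumerate(zip(desc1, kpts1)):
        guide = coarse_map(p1)  # approximate location of p1 in image 2
        cand = np.flatnonzero(np.linalg.norm(kpts2 - guide, axis=1) < radius)
        if cand.size == 0:
            continue  # no candidate keypoint within the guided region
        j = cand[np.argmin(np.linalg.norm(desc2[cand] - d1, axis=1))]
        matches.append((i, int(j)))
    return matches

def epipolar_distance(p1, p2, F):
    """Symmetric epipolar distance (in pixels) of a correspondence
    (p1, p2) under fundamental matrix F; this kind of geometric residual
    can serve as a weak supervision signal."""
    x1, x2 = np.append(p1, 1.0), np.append(p2, 1.0)
    l2, l1 = F @ x1, F.T @ x2      # epipolar lines in images 2 and 1
    err = abs(x2 @ l2)             # |x2^T F x1|
    return 0.5 * err * (1.0 / np.hypot(l2[0], l2[1])
                        + 1.0 / np.hypot(l1[0], l1[1]))
```

For example, with identical keypoints and descriptors in both images and an identity `coarse_map`, `guided_match` recovers the identity matching; a fundamental matrix for a pure horizontal translation assigns zero distance to points on the same scanline.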


@inproceedings{darmon2020learning,
  title={Learning to Guide Local Feature Matches},
  author={Darmon, Fran\c{c}ois and Aubry, Mathieu and Monasse, Pascal},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2020}
}


François Darmon was supported by a CIFRE PhD grant from Thales LAS France and Mathieu Aubry by ANR project EnHerit ANR-17-CE23-0008. We thank Bénédicte Bascle and Jean-Clément Devaux from Thales LAS for helpful discussions.