Virtual Training for a Real Application: Accurate Object-Robot Relative Localization without Calibration



Vianney Loing
Renaud Marlet
Mathieu Aubry

Published in International Journal of Computer Vision



We formulate relative block-robot positioning from an uncalibrated camera as a classification problem: we predict the position (x, y) of the center of a block with respect to the robot base, and the angle θ between the block and the robot main axes (a). We show that a CNN can be trained for this task on synthetic images alone, using randomized poses, appearances and camera parameters (b), and then used to perform very accurate relative positioning on real images (c).
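
Concretely, the (x, y, θ) prediction can be cast as three classification heads over discretized position and angle bins. The sketch below illustrates this formulation in PyTorch; the ResNet-18 backbone, bin counts, input size and loss wiring are our own illustrative assumptions, not the exact configuration from the paper.

# Illustrative PyTorch sketch: backbone, bin counts and input size are
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class RelativePoseClassifier(nn.Module):
    """Casts (x, y, theta) estimation as three classification problems."""
    def __init__(self, n_xy_bins=100, n_theta_bins=90):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # keep the 512-d image feature
        self.backbone = backbone
        self.head_x = nn.Linear(512, n_xy_bins)         # discretized x
        self.head_y = nn.Linear(512, n_xy_bins)         # discretized y
        self.head_theta = nn.Linear(512, n_theta_bins)  # discretized angle

    def forward(self, image):
        feat = self.backbone(image)
        return self.head_x(feat), self.head_y(feat), self.head_theta(feat)

# Each head is trained with a standard cross-entropy loss:
model = RelativePoseClassifier()
logits = model(torch.randn(4, 3, 224, 224))
targets = [torch.randint(0, l.shape[1], (4,)) for l in logits]
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, targets))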

Paper

[HAL] [PDF] [IJCV]

Abstract

Localizing an object accurately with respect to a robot is a key step for autonomous robotic manipulation. In this work, we propose to tackle this task knowing only 3D models of the robot and object in the particular case where the scene is viewed from uncalibrated cameras – a situation which would be typical in an uncontrolled environment, e.g., on a construction site. We demonstrate that this localization can be performed very accurately, with millimetric errors, without using a single real image for training, a strong advantage since acquiring representative training data is a long and expensive process. Our approach relies on a classification Convolutional Neural Network (CNN) trained using hundreds of thousands of synthetically rendered scenes with randomized parameters. To evaluate our approach quantitatively and make it comparable to alternative approaches, we build a new rich dataset of real robot images with accurately localized blocks.
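
The randomized parameters mentioned in the abstract are the core of the virtual-training idea: every synthetic scene is rendered with a freshly sampled block pose, appearance and camera, so the network never sees a fixed calibration it could latch onto. A hypothetical sampling routine might look like the following; every parameter name and range below is an illustrative assumption.

# Hypothetical sketch of the domain randomization used to generate training
# scenes; all names and ranges are illustrative assumptions.
import random

def sample_scene_parameters():
    """Draw one random configuration for a synthetic render."""
    return {
        # Block pose relative to the robot base (the quantities to predict).
        "x_mm": random.uniform(-300.0, 300.0),
        "y_mm": random.uniform(-300.0, 300.0),
        "theta_deg": random.uniform(0.0, 90.0),
        # Randomized appearance, so the CNN cannot overfit to one look.
        "block_color": [random.random() for _ in range(3)],
        "floor_texture_id": random.randrange(50),
        "light_intensity": random.uniform(0.3, 1.5),
        # Randomized, uncalibrated camera: extrinsics and intrinsics vary.
        "cam_azimuth_deg": random.uniform(0.0, 360.0),
        "cam_elevation_deg": random.uniform(10.0, 80.0),
        "cam_distance_m": random.uniform(1.0, 3.0),
        "focal_length_mm": random.uniform(20.0, 60.0),
    }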

Video

Citing this work

@article{Loing2018,
author = {Loing, Vianney and Marlet, Renaud and Aubry, Mathieu},
title = {Virtual Training for a Real Application: Accurate Object-Robot Relative Localization Without Calibration},
journal = {International Journal of Computer Vision},
year = {2018},
month = {Jun},
day = {21},
issn = {1573-1405},
doi = {10.1007/s11263-018-1102-6},
url = {https://doi.org/10.1007/s11263-018-1102-6}
}

Datasets and Code

The synthetic dataset used for training as well as the three UnLoc datasets ('lab', 'field', 'adv') used for validation can be downloaded [here].

PyTorch trained models presented in the paper can be downloaded [here].
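
For reference, running one of these checkpoints might look like the sketch below, reusing the RelativePoseClassifier sketched earlier. The checkpoint name, input size and bin-to-value mappings are placeholders; see the released code for the actual conventions.

# Generic inference sketch; file name and bin mappings are placeholders.
import torch

model = RelativePoseClassifier()
model.load_state_dict(torch.load("block_estimation.pth", map_location="cpu"))
model.eval()

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed photo
with torch.no_grad():
    logits_x, logits_y, logits_t = model(image)

# Map each predicted bin index back to a metric value via its bin center,
# assuming 6 mm position bins over [-300, 300] mm and 1-degree angle bins.
x_mm = -300.0 + 6.0 * (logits_x.argmax(1).item() + 0.5)
y_mm = -300.0 + 6.0 * (logits_y.argmax(1).item() + 0.5)
theta_deg = 1.0 * (logits_t.argmax(1).item() + 0.5)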

PyTorch source code for training on synthetic data and validation on the UnLoc datasets is available at https://github.com/tim885/blockEstimation/.

Lua/Torch trained models presented in the paper can be downloaded [here].

Lua/Torch source code for validation is available at https://github.com/VLoing/UnLoc.

CAD models of the ABB IRB120 robot are available on this [page].

Clamp 3D models are available here: [clamp.obj] [clamp.stl] [clamp.3ds].