Positioning a coarse-calibrated camera with respect to an unknown object by 2D 1/2 visual servoing

Contact: Ezio Malis, François Chaumette

Creation date: December 1999

Demonstration description

Consider one of the typical applications of visual servoing: positioning a camera mounted on a robot end-effector relative to a target, for a grasping task for instance. The positioning task is divided into two steps:

  1. a learning step, in which the camera is placed at the desired position with respect to the object and the corresponding image (the desired image) is stored;
  2. a servoing step, in which the camera, starting from an arbitrary initial position, is moved by the control law so that the current image converges to the desired one.

After convergence, the camera is in the same position, with respect to the object, as in the learning step and thus the positioning task is achieved.

Desired image (left) and initial image (right)

The robot is controlled using a new vision-based control approach halfway between classical position-based and image-based visual servoing. This new method, called 2D 1/2 visual servoing, avoids their respective disadvantages. At each iteration, the homography relating planar feature points extracted from two images (corresponding to the current and desired camera poses) is computed [1]. From this homography, the camera rotation and the extended image coordinates (i.e. the classical image coordinates plus the relative depth) are used to design a closed-loop control law acting on the six camera d.o.f. Unlike position-based visual servoing, our scheme does not need any 3D geometric model of the object. Furthermore, unlike image-based visual servoing, our approach ensures the convergence of the control law in the whole task space (i.e. the half space in front of the target) [2].
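As an illustrative sketch only (not the authors' implementation), the structure of such a control law can be shown in a few lines: the rotational part of the error is the axis-angle vector theta*u extracted from the rotation matrix, the translational part is the error on the extended image coordinates, and a simple proportional law drives both to zero. The identity approximation of the interaction matrix and the gain `lam` are simplifying assumptions of this sketch:

```python
import numpy as np

def rotation_to_theta_u(R):
    """Axis-angle vector theta*u of a 3x3 rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Axis u from the skew-symmetric part of R.
    u = (1.0 / (2.0 * np.sin(theta))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]
    )
    return theta * u

def control_law(x_ext, x_ext_star, R, lam=0.5):
    """Simplified 2D 1/2 proportional law (illustrative assumption:
    identity interaction matrix). e = [x_ext - x_ext*; theta*u],
    camera velocity v = -lam * e (6 d.o.f.)."""
    e = np.concatenate([x_ext - x_ext_star, rotation_to_theta_u(R)])
    return -lam * e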

Experimental Results

The control law has been tested on a seven-d.o.f. Mitsubishi PA10 industrial robot (at EDF DER Chatou) and on a six-d.o.f. AFMA Cartesian robot (at IRISA). The homography estimation algorithm has been tested on the target shown in the previous figure. The obtained estimation is accurate enough to ensure convergence of the visual servoing even for very large camera displacements. The decrease of the error on the extended image coordinates and of the rotational error is plotted in the following figure, with the color code x blue, y green, z red:
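The homography between the current and desired views can be estimated from at least four point correspondences on the planar target. As a minimal sketch (the standard DLT algorithm with numpy, not necessarily the algorithm used in this demonstration):

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Estimate the 3x3 homography H (up to scale) mapping pts_src to
    pts_dst, from >= 4 correspondences, via the Direct Linear Transform."""
    A = []
    for (x, y), (xp, yp) in zip(pts_src, pts_dst):
        # Each correspondence gives two linear equations in the
        # entries of H (from x' ~ H x in homogeneous coordinates).
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    # The solution is the right singular vector of the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With noise-free correspondences the recovered H equals the true homography up to scale; with noisy image points, the least-squares SVD solution minimizes the algebraic error.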

Error on the extended image coordinates (pixels) / Rotational error (deg)

The obtained results are particularly stable and robust. The translational and rotational velocities are plotted, with the same color code, in the following figure:

Translational velocity (cm/s) / Rotational velocity (deg/s)

The error on the image coordinates of each target point and the corresponding trajectory in the image (the red circles represent the final position and the blue diamonds represent the initial position) are given in the following Figure:

Error on the coordinates of the target points (pixels) / Trajectory of the target points in the image (pixels)

We can note the convergence of the image coordinates to their desired values (the control scheme is stopped when the maximal error is less than 0.5 pixels), which demonstrates the correct realization of the task.


The main application of this work is the control of the six end-effector d.o.f. of a robot for any tracking or positioning task: for example the automatic manipulation of maintenance tools in nuclear power plants.

Scientific Context

This demonstration is part of the visual servoing research theme developed in the VISTA team. The homography estimation algorithm used in this task was developed within the framework of the projective reconstruction work shared with the TEMICS team.


This work is supported by EDF (Électricité de France), the French national electricity company.


  1. E. Malis, F. Chaumette, S. Boudet. Positioning a coarse-calibrated camera with respect to an unknown object by 2D 1/2 visual servoing. IEEE Int. Conf. on Robotics and Automation, Leuven, Belgium, May 1998.
  2. E. Malis, F. Chaumette, S. Boudet. 2D 1/2 visual servoing. Technical Report No. 1166, IRISA, February 1998.

Irisa - Inria - Copyright 2009 Lagadic Project