This page presents a real-time, robust and efficient 3D model-based tracking algorithm for visual servoing. A virtual visual servoing approach is used for monocular 3D tracking; this method is similar to classical non-linear pose computation techniques. Robustness is obtained by integrating an M-estimator into the virtual visual control law through an iteratively re-weighted least squares (IRLS) implementation.
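The IRLS idea can be illustrated with a short sketch: residuals are converted into confidence weights by an M-estimator (Tukey's biweight is used here as one common choice; the exact estimator, constants and function names below are illustrative assumptions, not the page's implementation), and the weighted pseudo-inverse then yields the virtual camera velocity update.

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey biweight: zero weight beyond c*sigma, smooth down-weighting inside.
    The scale sigma is estimated robustly via the median absolute deviation."""
    sigma = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
    sigma = max(sigma, 1e-9)  # guard against a degenerate zero scale
    u = residuals / (c * sigma)
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

def irls_step(L, e, lam=0.5):
    """One virtual visual servoing update: v = -lam * (W L)^+ (W e),
    where L is the interaction matrix and e the feature error vector."""
    W = np.diag(tukey_weights(e))
    return -lam * np.linalg.pinv(W @ L) @ (W @ e)
```

With this weighting, an outlying residual (e.g. an edge point attracted by a shadow) receives a weight of zero and no longer perturbs the pose update, which is what gives the tracker its robustness to occlusion and illumination changes.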
Results of the virtual visual servoing tracker are presented on the Real-time 3D localisation and tracking page.
On this page we present results of the extension of this approach to multiple cameras. Results show the method to be robust to occlusion, changes in illumination and mis-tracking.
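A common way to extend such a tracker to several cameras is to express each camera's interaction matrix in a common control frame (via a velocity-twist transformation) and stack the per-camera errors into a single control law. The sketch below illustrates that stacking under these assumptions; the function and variable names are hypothetical and the actual multi-camera formulation used here may differ.

```python
import numpy as np

def stacked_control(errors, interaction_matrices, twists, lam=0.5):
    """Combine features from several cameras into one control law.

    errors               : list of per-camera error vectors e_i
    interaction_matrices : list of L_i relating each camera's velocity to de_i/dt
    twists               : list of 6x6 velocity-twist matrices V_i mapping the
                           common control frame to each camera frame (assumed known)
    """
    # Stack all features: each block row is L_i expressed in the common frame.
    L = np.vstack([Li @ Vi for Li, Vi in zip(interaction_matrices, twists)])
    e = np.concatenate(errors)
    # Least-squares velocity over all cameras at once.
    return -lam * np.linalg.pinv(L) @ e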
The context of this work is the development of robust and fast 3D tracking algorithms for visual servoing applications in a space context. The goal is to develop a demonstrator of a robot arm able to grasp objects by visual servoing in a space environment (Vimanco project). The considered robot is the ESA Eurobot. The configuration of this robot (three arms) makes it possible to use multiple cameras (with a wide baseline), allowing either eye-in-hand or eye-to-hand control.
At this point, the tracking and visual servoing capabilities have been tested at INRIA Rennes using a classical 6-axis robot. Further tests using the Eurobot will be carried out within a few months at ESA-ESTEC in the Netherlands on the ISS testbed. In the following experiments, we consider an object named the Articulated Portable Foot Restraint (APFR). This is a rather complex non-polyhedral object, as can be seen in the image on the right.
APFR (Articulated Portable Foot Restraint), courtesy of the European Space Agency
The following experiments consist of a positioning task of the robot end-effector with respect to the APFR, using 3D visual servoing with different CCD camera setups:
- Monocular system
- Small-baseline stereoscopic system
- Wide-baseline stereoscopic system
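For such a positioning task, 3D (position-based) visual servoing typically regulates a pose error built from the translation difference and an axis-angle (theta*u) rotation error between the current and desired poses. The sketch below shows this generic scheme with the simplified proportional law v = -lambda*e; it is a textbook illustration under assumed conventions, not the control law actually used in these experiments.

```python
import numpy as np

def theta_u(R):
    """Axis-angle (theta * u) vector of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-9:
        return np.zeros(3)  # no rotation error
    u = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * u / (2.0 * np.sin(theta))

def pbvs_velocity(t, R, t_d, R_d, lam=0.5):
    """Position-based servo: drive the pose error (t - t_d, theta*u) to zero.
    (t, R) is the current pose of the target, (t_d, R_d) the desired one."""
    e = np.concatenate([t - t_d, theta_u(R_d.T @ R)])
    return -lam * e
```

The pose (t, R) fed into this law is exactly what the model-based tracker estimates at each frame, which is why tracking robustness directly conditions the quality of the servoing.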
The monocular tracker is described in: