Contact: Romeo Tatsambon Fomena
Creation Date: March 2008
Visual servoing is exploited to move an eye-in-hand system to a position where the desired pattern of the image of the target can be observed. We use a fisheye camera, a particular omnidirectional camera that exploits a short focal length to obtain a wide field of view. The target is a white soccer ball of 9.5 cm radius marked with two points on its surface. Using such a simple object allows the selected visual features to be computed easily from the image of the target at video rate, without any image processing problems. The desired features have been computed after moving the robot to a position corresponding to the desired image. The following figures picture the desired (left) and initial (right) images used for each experiment.
Here we consider the case where the value of the radius of the ball is available. The following figures describe the system behaviour: the figure on the left plots the trajectory of the visual feature errors, while the figure on the right plots the camera velocities. The video for this case is available here
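The camera velocities plotted on the right are produced by the classical visual servoing control law, v = -lambda * pinv(L) * (s - s*). As a minimal sketch (function and variable names are hypothetical, and the interaction matrix here is a placeholder, not the one derived for the sphere features):

```python
import numpy as np

def velocity_command(s, s_star, L_hat, lam=0.5):
    """Classical visual servoing law: v = -lambda * pinv(L_hat) @ (s - s_star).

    s      : current visual features
    s_star : desired visual features
    L_hat  : estimate of the interaction matrix relating feature
             velocities to camera velocities
    lam    : control gain
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L_hat) @ error

# Illustration with 6 features controlling 6 degrees of freedom;
# the identity stands in for a well-conditioned interaction matrix.
L_hat = np.eye(6)
s = np.array([0.1, -0.2, 0.05, 0.0, 0.3, -0.1])
s_star = np.zeros(6)
v = velocity_command(s, s_star, L_hat)
```

With a square, invertible interaction matrix, the feature error decreases exponentially with rate lambda, which is what the error trajectories in the left-hand figures typically show.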
This work is concerned with modeling issues in visual servoing. The challenge is to find optimal visual features for visual servoing. The optimality criteria are: local and, as far as possible, global stability of the system; robustness to calibration and modeling errors; absence of singularities and local minima; satisfactory trajectories of the system and of the measures in the image; and finally a linear link and maximal decoupling between the visual features and the degrees of freedom taken into account.
A spherical projection model is proposed to search for optimal visual features. Compared to the perspective or omnidirectional projection models, determining optimal features with this projection model is quite easy and intuitive. Using this projection model, a new (2 1/2D, 3D) minimal set of six features has been designed for visual servoing from a sphere marked with two points on its surface (see figure below). Using this new set, a classical control law has been proved to be globally stable with respect to modeling error: in practice, this means that it is not necessary to use the real radius of the sphere target; a rough estimate is sufficient. The new set can be computed from the image plane of most omnidirectional cameras using spherical moments.
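The spherical projection underlying this model simply maps a 3D point onto the unit sphere centred at the camera. A minimal sketch of that mapping (the function name is hypothetical; computing the full feature set from spherical moments is not shown here):

```python
import numpy as np

def spherical_projection(P):
    """Project a 3D point P (in the camera frame) onto the unit sphere:
    p_s = P / ||P||.  This is the projection model under which the
    six sphere features are derived."""
    P = np.asarray(P, dtype=float)
    return P / np.linalg.norm(P)

# A point at distance 3 from the camera projects to a unit vector
# pointing in the same direction.
p_s = spherical_projection([1.0, 2.0, 2.0])
```

Unlike the perspective model, this projection treats all viewing directions uniformly, which is why features derived on the sphere transfer naturally to wide-field-of-view cameras such as the fisheye used here.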