Contact : Eric Marchand
Date : 2011
In this work we propose a new information-theoretic approach to visual servoing that directly uses the information (in Shannon's sense) contained in the images. A metric derived from information theory, mutual information, is considered. Mutual information is widely used in multi-modal image registration since it is insensitive to changes in lighting conditions and to a wide class of non-linear image transformations.
In this work mutual information is used as a new visual feature for visual servoing, which allows us to build a new control law that can control the 6 degrees of freedom of a robot. Among other advantages, this approach requires neither a matching nor a tracking step, is robust to large illumination variations, and allows different image modalities to be considered within the same task.
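To make the feature concrete, here is a minimal sketch (not the authors' implementation) of how the mutual information between two grayscale images can be estimated from their joint intensity histogram, the standard construction used in multi-modal registration:

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Estimate the mutual information (in bits) between two
    equally-sized 8-bit grayscale images from their joint histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()   # joint probability p(x, y)
    px = pxy.sum(axis=1)        # marginal p(x)
    py = pxy.sum(axis=0)        # marginal p(y)
    # MI = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) ), over nonzero bins
    nz = pxy > 0
    return float(np.sum(pxy[nz] *
                        np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

Note that MI of an image with itself equals its entropy, and it decreases towards zero as the two images become statistically independent; crucially, any monotone remapping of one image's intensities (e.g. a lighting change or a modality change) leaves MI essentially unchanged, which is the property exploited here.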
Several platforms are available to validate our approaches. The MI-VS scheme has been tested on a 6-dof gantry robot. The algorithm converges with a 3D trajectory close to the geodesic and a final positioning error of about 0.1 mm in translation and 0.1° in rotation, at a distance of 1 meter between the camera and the scene. Two typical experiments are presented in this video.
In this work we propose a new way to achieve a navigation task (visual path following) for a non-holonomic vehicle. We consider an image-based navigation process and show that it is possible to navigate along a visual path without relying on the extraction, matching and tracking of geometric visual features such as keypoints. The proposed approach relies directly on the information (entropy) contained in the image signal. We show that a control law can be built directly from the maximisation of the information shared between the current image and the next key image in the visual path. The shared information between those two images is measured using mutual information, which is known to be robust to illumination variations and occlusions.
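The actual control law is derived analytically from the gradient of the mutual information; as a rough, purely illustrative sketch of the underlying idea (not the authors' method, and with a hypothetical gain value), one can picture a 1-DOF version in which the steering command is the one that best realigns the current image with the key image in the MI sense:

```python
import numpy as np

def mi(a, b, bins=16):
    # Mutual information (bits) from the joint histogram of two 8-bit images.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                             range=[[0, 256], [0, 256]])
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] /
                                        (px[:, None] * py[None, :])[nz])))

def steering_from_mi(current, key, max_shift=20, gain=0.05):
    """Pick the horizontal pixel shift of the current image that maximises
    MI with the key image, then map it to a proportional steering command.
    Illustrative only: the real scheme uses the analytical MI gradient."""
    shifts = range(-max_shift, max_shift + 1)
    best = max(shifts, key=lambda s: mi(np.roll(current, s, axis=1), key))
    return -gain * best  # steer so as to cancel the apparent lateral offset
```

This brute-force search over shifts stands in for what the gradient-based control law does continuously: driving the camera towards the pose that maximises the shared information with the next key image, with no feature extraction or matching.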
The mutual information control scheme has been tested on a non-holonomic vehicle in an outdoor environment. Let us emphasize that the vehicle is equipped with a monocular camera only: no other sensor such as GPS, radar or odometry is considered in these experiments. Furthermore, the 3D structure of the scene remains fully unknown during both the learning and navigation steps. Given the vehicle speed during the learning step, a keyframe was acquired every meter. The video shows aerial views of the environment where the navigation task takes place, along with the considered trajectory (about 400 meters). As seen in these views, the environment is semi-urban, with both trees and buildings (whose windows act as repetitive textures). Let us note that the vehicle crosses a covered parking lot (green part of the trajectory) and that the ground is not perfectly flat (mainly in the first 100 meters of the trajectory).