Nonholonomic visual navigation

Contact: Andrea Cherubini

Creation Date: March 2010

Overview

In recent research, autonomous vehicle navigation is often performed by processing visual information. This approach is well suited to urban environments, where tall buildings can block satellite reception and degrade GPS localization, while offering numerous useful visual features. We present our recent improvements to the existing Lagadic navigation framework, in which a topological path is represented as a series of images.

Navigation is divided into subtasks, each consisting of reaching the next database image. We have contributed to the control scheme by designing new models for the visual features [1, 2], by proposing a varying reference in the feedback loop [3], and by considering obstacle avoidance [4]. These works are detailed below.
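The subtask structure above can be sketched as a simple switching rule: advance to the next key image once the current view is close enough to the one being tracked. This is an illustrative sketch only; the feature matching, the error metric, and the threshold are assumptions, not the framework's actual implementation.

```python
import numpy as np

def feature_error(current_feats, key_feats):
    """Mean image-plane distance between matched feature points (pixels)."""
    return np.mean(np.linalg.norm(current_feats - key_feats, axis=1))

def next_key_index(current_feats, key_path, idx, threshold=5.0):
    """Advance the subtask: switch to the next key image once the
    feature error to the current key image falls below the threshold."""
    if idx + 1 < len(key_path) and feature_error(current_feats, key_path[idx]) < threshold:
        return idx + 1
    return idx
```

In practice the switching condition can also use the number of matched features or the epipolar geometry between the current and key images; a plain distance threshold is the simplest variant.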

Comparing appearance-based controllers for navigation

Our vehicle uses a monocular camera, and the path is represented as a series of reference images. Since the robot is equipped with only one camera, guaranteeing vehicle pose accuracy during navigation is difficult. The main contribution of [1] is the evaluation and comparison, both in the image space and in the 3D pose space, of six appearance-based controllers (one pose-based and five image-based) for replaying the reference path. Experimental results are presented in a simulated environment as well as on a real robot. The experiments show that the two image Jacobian controllers that exploit epipolar geometry to estimate feature depth outperform the four other controllers, in both the pose and the image space. We also show that image Jacobian controllers using uniform feature depths are effective alternatives whenever sensor calibration or depth estimation is inaccurate. Further details, including videos of the experiments, are available.
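The image Jacobian controllers compared in [1] are all built on the classical point-feature interaction matrix: the camera velocity is computed from the pseudo-inverse of the stacked Jacobian times the feature error, with depths either estimated (e.g. from epipolar geometry) or set to a uniform value. The sketch below is a minimal, generic version of that law, not the paper's exact formulation; the gain and function names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, ref_features, depths, gain=0.5):
    """Camera velocity screw v = -gain * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(ref_features)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

Using uniform depths amounts to passing the same constant Z for every feature; the comparison in [1] shows this degrades performance gracefully when depth estimation is unreliable.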

Coarsely calibrated visual servoing with a catadioptric camera

A catadioptric vision system combines a camera and a mirror to achieve a wide field of view. This type of vision system has many potential applications in mobile robotics. In [2], we design a robust image-based control scheme using a catadioptric vision system mounted on a mobile robot. We exploit the fact that decoupling the features contributes to the robustness of a control method. More precisely, from the image of a point, we derive a minimal and decoupled set of features measurable on any catadioptric vision system. Using this minimal set, a classical control method is proved to be robust in the presence of point range errors. Finally, experimental results with a coarsely calibrated mobile robot validate the robustness of the new decoupled scheme. Further details, including videos of the experiments, are available.
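Central catadioptric cameras are commonly handled with the unified projection model, in which an image point is lifted onto a unit sphere before features are defined; spherical coordinates of this lifted point are a typical starting point for decoupled feature sets. The sketch below shows the standard lifting step only, under the unified model with mirror parameter ξ; it is background material, not the specific feature set of [2].

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Lift a normalized image point (x, y) onto the unit sphere under the
    unified central catadioptric model with mirror parameter xi.
    xi = 0 recovers the perspective (pinhole) case."""
    r2 = x**2 + y**2
    eta = (xi + np.sqrt(1.0 + (1.0 - xi**2) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])
```

The lifted point always has unit norm, so angular (decoupled) features such as its elevation and azimuth can be computed independently of the mirror geometry once ξ is known, even coarsely.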

Time-independent varying reference

In [3], we present a controller for visual navigation that utilizes a time-independent varying reference in the feedback law. The navigation framework relies on a monocular camera, and the path is represented as a series of key images. The varying reference is determined using a vector field derived from the previous and next key images. Results in a simulated environment, as well as on a real robot, show the advantages of the varying reference over a fixed one, both in the image and in the 3D state space. A video of the experiments carried out with the varying reference is available.
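The idea of a time-independent varying reference can be illustrated with a crude stand-in for the vector field: slide the reference features along the segment from the previous to the next key image, parameterized by the robot's estimated progress rather than by time. The linear interpolation and look-ahead constant below are assumptions for illustration; the actual field construction in [3] differs.

```python
import numpy as np

def varying_reference(s_prev, s_next, s_current):
    """Place the reference s* on the prev->next feature segment, slightly
    ahead of the current features' projected progress (time-independent)."""
    seg = s_next - s_prev
    denom = np.dot(seg.ravel(), seg.ravel())
    # Project the current features onto the segment: progress t in [0, 1].
    t = np.clip(np.dot((s_current - s_prev).ravel(), seg.ravel()) / denom, 0.0, 1.0)
    # Look slightly ahead so the error does not vanish mid-segment.
    t_ref = min(t + 0.1, 1.0)
    return s_prev + t_ref * seg
```

Because the reference depends on the current features rather than on elapsed time, the robot can slow down or stop without the reference running away from it, which is the key property of a time-independent scheme.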

A Redundancy-based approach to obstacle avoidance

In [4], we propose a general framework for robot task execution with simultaneous obstacle avoidance. Kinematic redundancy guarantees that obstacle avoidance and the primary task are independent, and the primary task can be purely sensor-based. The problem is solved both in an obstacle-free and in a dangerous context, and the control law is smoothed in the intermediate situations. The control scheme is validated in a series of simulations, realized within Webots, and in real outdoor experiments; video clips of both are available.
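The redundancy mechanism underlying this kind of framework is the classical null-space projection: the primary task fixes part of the velocity space, and the avoidance motion is projected onto the task's null space so it can never perturb the task. The sketch below is the generic textbook form, not the specific control law of [4]; the matrices and gain are illustrative.

```python
import numpy as np

def redundant_control(J_task, e_task, q_dot_avoid, gain=1.0):
    """q_dot = -gain * J^+ e + (I - J^+ J) q_dot_avoid:
    primary task via the pseudo-inverse, obstacle avoidance in its null space."""
    J_pinv = np.linalg.pinv(J_task)
    n = J_task.shape[1]
    null_proj = np.eye(n) - J_pinv @ J_task   # projector onto the task null space
    return -gain * J_pinv @ e_task + null_proj @ q_dot_avoid
```

By construction, J_task applied to the output cancels the avoidance term, so the task error dynamics are the same whether or not an obstacle is being avoided; smoothing between the obstacle-free and dangerous contexts then only has to blend the avoidance input, not the task law.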

Acknowledgments

The work presented in this site was funded in part by the ANR CityVIP project.

References

  1. A. Cherubini, M. Colafrancesco, G. Oriolo, L. Freda, F. Chaumette, Comparing appearance-based controllers for nonholonomic navigation from a visual memory, ICRA Workshop on Safe Navigation in Open and Dynamic Environments: Application to Autonomous Vehicles, Kobe, Japan, 2009.
  2. R. Tatsambon Fomena, H. U. Yoon, A. Cherubini, F. Chaumette, S. Hutchinson, Coarsely calibrated visual servoing of a mobile robot using a catadioptric vision system, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), St. Louis, U.S.A., 2009.
  3. A. Cherubini, F. Chaumette, Visual navigation with a time-independent varying reference, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), St. Louis, U.S.A., 2009.
  4. A. Cherubini, F. Chaumette, A redundancy-based approach to obstacle avoidance applied to mobile robot navigation, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2010.

Irisa - Inria - Copyright 2009 Lagadic Project