Virtual Visual Servoing: A framework for augmented reality

 
Contact: Eric Marchand, François Chaumette

Creation Date: July 2002



Note about this page

This page was created in 2002 and deals with the work presented at the EUROGRAPHICS'02 conference. It does not present any results related to the paper to be published at ISMAR'03 (Tokyo, October 2003). A new demonstration dealing with those results will be made available shortly.

Overview

In this demonstration we illustrate a formulation of pose computation as a full-scale non-linear optimization: Virtual Visual Servoing (VVS). We consider the pose computation problem as similar to 2D visual servoing [Sundareswaran98]. Visual servoing, or image-based camera control [Hutchinson96][Espiau92], allows a camera to be controlled with respect to its environment. More precisely, it consists in specifying a task (mainly positioning or target-tracking tasks) as the regulation in the image of a set of visual features: a set of constraints is defined in image space, and a control law that minimizes the error between the current and desired positions of these visual features can then be built automatically. This approach has proved to be an efficient solution to camera positioning tasks in robotics and, more recently, in computer graphics. Considering pose computation as an image-based visual servoing problem takes advantage of all the background knowledge and results available in this research area, and allows us to propose a very simple and versatile formulation of this important problem. One of the main advantages of this approach is that different geometric features can be handled within the same process. We also show how the framework extends easily to the case where the camera parameters are unknown or modified.
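In this framework, the pose is the position r of a virtual camera for which the projected features s(r) coincide with their positions s* measured in the image. Following the classical image-based control law [Espiau92], the virtual camera is displaced at each iteration with the velocity

    \[ \mathbf{v} = -\lambda \, \widehat{\mathbf{L}_\mathbf{s}}^{+} \left( \mathbf{s}(r) - \mathbf{s}^{*} \right) \]

where \widehat{\mathbf{L}_\mathbf{s}}^{+} is the pseudo-inverse of a model of the interaction matrix of the selected features and \lambda a positive gain; iterating until the error vanishes yields the pose.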

Principle: pose computation by virtual visual servoing.

The principle of our algorithm is to iteratively modify, using a visual servoing control law, the position of a virtual camera in order to register the desired features extracted from the image (in green) with the current ones obtained by back-projection of the object model (in blue) for the current pose. Image (a) corresponds to the initialization, while in image (d) registration has been achieved and the pose is computed. The figure illustrates the registration convergence for one image. It also illustrates the minor influence of the initialization: the initial position and orientation of the camera are indeed very different from the final computed ones. In this example, straight lines are used to compute the pose.
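Each feature type contributes its own interaction matrix, and these are simply stacked to form L_s, which is what makes the formulation versatile. As the standard example (the matrices for the lines used here, as well as for circles and cylinders, are derived in the paper), a point with normalized image coordinates (x, y) = (X/Z, Y/Z) and depth Z has the classical interaction matrix

    \[
    \mathbf{L}_{(x,y)} =
    \begin{pmatrix}
      -1/Z & 0    & x/Z & x y   & -(1+x^2) & y  \\
      0    & -1/Z & y/Z & 1+y^2 & -x y     & -x
    \end{pmatrix}
    \]

which relates the motion of the point in the image to the six components of the camera velocity.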



See the MPEG movie.

Results

The architecture of our system is similar to the one proposed in [Billinghurst01]. Since our approach is based on the visual servoing framework, we rely on a library dedicated to such systems, called ViSP (Visual Servoing Platform). This library is written in C++ and provides both the feature tracking algorithms and the pose computation and calibration algorithms. A new software component based on Open Inventor has been written to allow the insertion of virtual objects into the images. All the experiments presented in the next paragraphs were carried out on a simple PC with an NVIDIA 3D board and an Imaging Technology IC-Comp framegrabber.
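To give a flavour of how a pose computation is expressed on top of ViSP, here is a minimal sketch using the ViSP 3 API as it was later released publicly; the class names (vpPoint, vpPose, vpHomogeneousMatrix) and the DEMENTHON_VIRTUAL_VS method belong to today's library, not necessarily to the 2002 version used here, and the 2D measurements are made up:

    // Pose computation by virtual visual servoing with ViSP (ViSP 3 API).
    #include <iostream>
    #include <visp3/core/vpHomogeneousMatrix.h>
    #include <visp3/core/vpPoint.h>
    #include <visp3/vision/vpPose.h>

    int main()
    {
      // 3D model points in the object frame (metres): a 20 cm square.
      vpPoint model[4] = { vpPoint(-0.1, -0.1, 0.0), vpPoint( 0.1, -0.1, 0.0),
                           vpPoint( 0.1,  0.1, 0.0), vpPoint(-0.1,  0.1, 0.0) };

      // Matching 2D measurements in normalized image coordinates
      // (x = X/Z, y = Y/Z), as extracted by the tracker. Values made up.
      double x[4] = { -0.08,  0.12, 0.10, -0.10 };
      double y[4] = { -0.09, -0.11, 0.09,  0.11 };

      vpPose pose;
      for (int i = 0; i < 4; ++i) {
        model[i].set_x(x[i]); // desired feature s*
        model[i].set_y(y[i]);
        pose.addPoint(model[i]);
      }

      // Linear initialization (Dementhon) followed by the non-linear
      // virtual visual servoing refinement.
      vpHomogeneousMatrix cMo;
      pose.computePose(vpPose::DEMENTHON_VIRTUAL_VS, cMo);

      std::cout << "Estimated pose cMo:\n" << cMo << std::endl;
      return 0;
    }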

Augmented reality using precise patterns.

We first report augmented reality experiments that use precise patterns to estimate the camera viewpoint. These experiments show that the approach can be used efficiently in interactive applications such as (but not restricted to) collaborative immersive workspaces, interactive visualization of cultural heritage or architecture, and interactive video games (by moving and orienting a simple pattern, the player can modify his or her perception of the game online).





See the MPEG movie.

Augmented reality in uncontrolled situations.

We then consider "real" images acquired with a commercial camcorder. In such experiments, the image processing may be very complex: extracting and tracking reliable points in a real environment is a genuine issue, so it is important to consider image features other than simple points. We demonstrate the use of circles, lines, and cylinders. In the various experiments, the features are tracked at video rate using the moving edges algorithm.
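The moving edges algorithm tracks a contour by searching, at each sample point along it, along the contour normal for the strongest edge response in the new frame. Below is a simplified, self-contained sketch of that 1-D search; the real algorithm uses oriented likelihood masks rather than the plain intensity difference used here, and the helper names are ours:

    // Sketch of the 1-D normal search at the core of a moving-edges
    // style tracker (simplified; illustrative only).
    #include <cmath>
    #include <cstdio>

    const int W = 64, H = 64;
    unsigned char img[H][W]; // grey-level image

    // Gradient magnitude along the normal (nx, ny) at pixel (u, v),
    // by central difference of the intensities one step away.
    double normalGradient(int u, int v, double nx, double ny)
    {
      int u1 = u + (int)std::lround(nx), v1 = v + (int)std::lround(ny);
      int u0 = u - (int)std::lround(nx), v0 = v - (int)std::lround(ny);
      if (u0 < 0 || u1 >= W || v0 < 0 || v1 >= H) return 0.0;
      return std::fabs((double)img[v1][u1] - (double)img[v0][u0]);
    }

    // Search +/- 'range' pixels along the normal for the best edge position.
    void trackSample(int &u, int &v, double nx, double ny, int range)
    {
      int bestU = u, bestV = v;
      double best = normalGradient(u, v, nx, ny);
      for (int k = -range; k <= range; ++k) {
        int uu = u + (int)std::lround(k * nx);
        int vv = v + (int)std::lround(k * ny);
        if (uu < 1 || uu >= W - 1 || vv < 1 || vv >= H - 1) continue;
        double g = normalGradient(uu, vv, nx, ny);
        if (g > best) { best = g; bestU = uu; bestV = vv; }
      }
      u = bestU; v = bestV;
    }

    int main()
    {
      // Synthetic image: vertical step edge at column 40.
      for (int v = 0; v < H; ++v)
        for (int u = 0; u < W; ++u)
          img[v][u] = (u < 40) ? 30 : 200;

      int u = 35, v = 32;             // sample predicted from the previous frame
      trackSample(u, v, 1.0, 0.0, 8); // normal is horizontal for a vertical line
      std::printf("edge found at (%d, %d)\n", u, v);
      return 0;
    }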

Publications

E. Marchand, F. Chaumette. Virtual Visual Servoing: a framework for real-time augmented reality. In EUROGRAPHICS 2002 Conference Proceedings, G. Drettakis, H.-P. Seidel (eds.), Computer Graphics Forum, 21(3), Saarbrücken, Germany, September 2002.

E. Marchand, F. Chaumette. A new formulation for non-linear camera calibration using virtual visual servoing. IRISA Research Report No. 1366, January 2001.

Other Related Papers

V. Sundareswaran, R. Behringer. Visual servoing-based augmented reality. In IEEE Int. Workshop on Augmented Reality, San Francisco, November 1998.

M. Billinghurst, H. Kato, I. Poupyrev. The MagicBook: moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications, 21(3):6-8, May 2001.

B. Espiau, F. Chaumette, P. Rives. A new approach to visual servoing in robotics. IEEE Trans. on Robotics and Automation, 8(3):313-326, June 1992.

S. Hutchinson, G. Hager, P. Corke. A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12(5):651-670, October 1996.

E. Marchand, N. Courty. Image-based virtual camera motion strategies. In S. Fels and P. Poulin, editors, Graphics Interface Conference, GI2000, pages 69-76, Montreal, Quebec, May 2000. Morgan Kaufmann.
