Robot motion in dynamic environments: Avoiding occlusions

Contact: Éric Marchand

Creation date: December 1998

Description of the demonstration

Robots must be able to use their sensors to react to changes in their environment. Here, the lack of reactivity is overcome by using information acquired by a camera. This demonstration, carried out in real time on one of IRISA's robots, presents results on the detection and avoidance of the occlusion of an object: if a moving object is about to hide the object on which the camera is focused, the perception strategy moves the camera so that it keeps observing the object of interest.



We seek to avoid these undesirable configurations by defining strategies that exploit the redundancy of the task function. The degrees of freedom not constrained by the visual task are used to avoid the occlusion. This choice requires a secondary task function suited to our objective: the cost function to be minimized must reach its maximum value when an occlusion is about to occur (i.e., when an object approaches the target in the image) and vanish when the risk of occlusion is negligible. The minimization of this cost function is carried out under the constraint that the visual task remains achieved.
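The underlying redundancy formalism can be sketched as follows (a minimal sketch in the standard task-function notation of the visual servoing literature; the symbols below are not defined on this page). The global task function e combines the visual task e_1 with the gradient of the occlusion-avoidance cost h_s:

    e = W^+ e_1 + (I_n - W^+ W) e_2,    with    e_2 = (∂h_s/∂q)^T

where W^+ is the pseudo-inverse of a full-rank matrix W whose null space equals that of the visual-task Jacobian. The projection operator (I_n - W^+ W) maps the secondary term onto the degrees of freedom left free by the visual task, so the avoidance motion cannot perturb the regulation of e_1. A simple proportional control law, q̇ = -λ e, then drives the global task to zero: the camera keeps the target at its desired image location while moving away from a predicted occlusion.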

Scientific context

This demonstration is part of the research on active vision carried out in the VISTA project. Active vision originates from attempts to simulate biological visual systems and to reproduce their adaptability. The objective is to solve problems raised in the design of robot vision systems, such as their sensitivity to noise, their limited precision, and above all their lack of reactivity. Our work aims at defining methods able to cope with the absence of planning that results from the on-line computation of the control law in visual servoing. The same methodology has been used to avoid the joint limits and the internal singularities of a manipulator, to take into account constraints on the camera's field of view, and to avoid obstacles.

Collaborations

This work was carried out in collaboration with Gregory D. Hager of the robotics and artificial intelligence laboratory of Yale University (New Haven, CT, USA).

References

  1. É. Marchand, G. Hager. Dynamic sensor planning in visual servoing. IEEE Int. Conf. on Robotics and Automation, Volume 3, pages 1988-1993, Leuven, Belgium, May 1998.
