Visual Control of Simulated Robots in V-REP via Matlab/Simulink and ROS

Creation date: March 2014

Introduction

The Matlab/Simulink environment is very convenient for developing, debugging, and testing complex control algorithms in a ''fast prototyping'' fashion. One can code algorithms either in the Matlab scripting language (thus taking advantage of its large numerical library) or by including external C/C++ code through, e.g., s-functions. The automatic code generation feature also makes it possible to speed up execution and even to deploy the generated binaries to other platforms (e.g., the robot itself). Finally, scopes, displays, and the other visualization and Matlab post-processing tools are a great asset during the debugging and testing phases of the algorithms.

On the other hand, to the best of our knowledge, Matlab lacks an easy-to-use 3D physical simulation engine, which is often necessary for testing algorithms before deploying them to a real robot. One such tool is V-REP, an open-source, state-of-the-art 3D physical simulator (freely available for academic use) that is becoming more and more widespread in the robotics community thanks to its flexibility (it can simulate many different robotic platforms), its dynamics engines (it supports ODE, Bullet, and Vortex), and its customizability (it offers many ways to include one's own code or to interface it with the external world).

For all these reasons we propose to interface Matlab/Simulink with V-REP, obtaining an easy-to-use development and testing platform for robotics applications.

Interfacing Simulink and V-REP using ROS

Custom C++ code can be integrated in Simulink models by using so-called s-function blocks, while V-REP provides a plugin mechanism for the same purpose. Simulink s-functions and V-REP plugins work in a similar way: both are shared libraries loaded by the main application that expose specific "entry points", i.e., functions called by the main application at certain phases of each simulation loop. This makes it possible to exploit the Robot Operating System (ROS) middleware in both Simulink and V-REP.
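The entry-point mechanism shared by s-functions and plugins can be illustrated with a small sketch (plain Python with hypothetical callback names; this is not the actual Simulink or V-REP API):

```python
# Illustrative sketch: a host application driving a "plugin" through
# named entry points at each phase of the simulation loop.

class Plugin:
    """Hypothetical plugin exposing the entry points a host would call."""
    def __init__(self):
        self.log = []

    def on_start(self):           # called once when the simulation begins
        self.log.append("start")

    def on_step(self, t):         # called at every simulation loop
        self.log.append(f"step@{t}")

    def on_end(self):             # called once when the simulation stops
        self.log.append("end")

def run_host(plugin, steps, dt=0.05):
    # The host owns the loop; the plugin only reacts to entry points.
    plugin.on_start()
    for k in range(steps):
        plugin.on_step(round(k * dt, 3))
    plugin.on_end()

p = Plugin()
run_host(p, steps=3)
print(p.log)  # ['start', 'step@0.0', 'step@0.05', 'step@0.1', 'end']
```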

ROS is a set of open-source software libraries and tools that has become a de facto standard in the robotics community. In addition to hardware drivers, state-of-the-art algorithms, and powerful developer tools, it provides a powerful inter-process communication mechanism: the publisher/subscriber libraries. This functionality can be conveniently exploited to interface Simulink and V-REP.
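The decoupling that the publisher/subscriber pattern provides can be sketched with a minimal plain-Python dispatcher (an illustration of the pattern only, not the actual ROS API; the topic name is made up):

```python
# Minimal publish/subscribe dispatcher: publishers and subscribers only
# share a topic name, never a direct reference to each other.
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # A subscriber registers a callback for a topic.
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # A publisher pushes a message; every subscriber callback fires.
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
received = []
bus.subscribe("/quadrotor/pose", received.append)
bus.publish("/quadrotor/pose", {"x": 0.0, "y": 1.0, "z": 2.0})
print(received)  # [{'x': 0.0, 'y': 1.0, 'z': 2.0}]
```

With real ROS topics, the same decoupling is what allows a Simulink model, a V-REP scene, and extra processing nodes to be swapped in and out independently.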

Using ROS as the communication interface has another important advantage: the very same Simulink blocks can be used not only for simulation, but also to control real robots sharing the same ROS interface. Moreover, in the same fashion, other programs can be introduced in the simulation+control loop through the same interface. One example, also shown in the simulation results at the end of this page, is the use of an additional ROS node to perform the image processing and feature extraction algorithms.

A ROS interface is already distributed with V-REP. It is powerful and general and satisfies most needs, but we found it lacking some specific functionalities, which motivated the development of the custom ROS plugin that can be downloaded from this page.

Simulink, on the other hand, did not provide any ROS interface when this work was developed (some progress has since been made with the release of an official Matlab ROS toolbox). For this reason, we had to develop our own set of s-function blocks that instantiate ROS publishers and subscribers in a Simulink model.

Synchronizing Simulink and V-REP

Since we are interfacing two different programs, each simulating a dynamic system, it is very important that the simulation times of Simulink and V-REP remain synchronized throughout the simulation. To this purpose we synchronize both programs with the system real time. This strategy is also useful with a view to interfacing Simulink with a real robot.

While V-REP natively supports real-time execution, Simulink requires the introduction of a custom s-function block that enforces a soft real-time execution constraint.
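The soft real-time constraint that such a block enforces can be sketched as a pacing loop that sleeps until the next wall-clock deadline (an illustrative Python sketch, not the actual s-function code; the step period is arbitrary):

```python
# Soft real-time pacing: each step waits for its wall-clock deadline.
# "Soft" means we only sleep when ahead of schedule and never abort
# when a step overruns its period.
import time

def run_soft_realtime(step_fn, dt, n_steps):
    next_deadline = time.monotonic()
    for k in range(n_steps):
        step_fn(k * dt)                       # simulation work for this step
        next_deadline += dt
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)                 # catch up with wall-clock time

ticks = []
start = time.monotonic()
run_soft_realtime(ticks.append, dt=0.02, n_steps=5)
elapsed = time.monotonic() - start
print(len(ticks), elapsed)   # 5 steps, roughly 0.1 s of wall-clock time
```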

Simulation results

We present three simulation examples that demonstrate the use of the proposed framework.

Quadrotor tracking a 3D target

In this demo the objective of the quadrotor is to track a 3D target that can be moved directly with the mouse in V-REP. V-REP provides via ROS the status of the quadrotor (linear and angular position and velocity) and the position of the target, which acts as the desired pose. A quadrotor controller implemented in Simulink computes the commands (torques and thrust) and sends them back to V-REP; the vrep_ros_bridge plugin receives the commands and applies them to the quadrotor.

Quadrotor controlling its pose via a visual servoing law

In this demo we use the same components as in the previous one, with the following difference: the motion of the quadrotor is defined by a visual servoing law. The objective of the quadrotor is to maintain its pose at a fixed distance and orientation with respect to the target, using visual information extracted from the images generated by its camera. V-REP generates the images from the camera on the quadrotor and sends them to a ROS node called visp_cam_vs. The target is a square composed of four blobs. In this node we compute the current visual feature vector $\mathbf{s} = (x_n,y_n,a_n)$ as follows: $a_n = Z^*\sqrt{\frac{a^*}{a}}$, $x_n = a_n x_g$, $y_n = a_n y_g$, where

• $a$ is the area of the object in the image
• $x_g$, $y_g$ its centroid coordinates
• $a^*$ the desired area
• $Z^*$ the desired depth between the camera and the target
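The feature computation above can be written out directly (a numpy sketch; the blob values are illustrative):

```python
import numpy as np

def visual_features(a, xg, yg, a_star, Z_star):
    """Compute s = (x_n, y_n, a_n) from the measured blob area and centroid.

    a      : measured area of the target in the image
    xg, yg : centroid coordinates of the target in the image
    a_star : desired area, Z_star : desired camera-target depth
    """
    a_n = Z_star * np.sqrt(a_star / a)    # a_n = Z* sqrt(a*/a)
    return np.array([a_n * xg, a_n * yg, a_n])

# At the desired configuration (a == a_star) the depth feature equals Z*.
s = visual_features(a=400.0, xg=0.1, yg=-0.2, a_star=400.0, Z_star=1.5)
print(s)  # approximately [0.15, -0.3, 1.5]
```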

The classical relationship relating the time derivative of the features $\dot{\boldsymbol{s}}$ to the (linear and angular) velocity of the camera is: $\dot{\boldsymbol{s}} = \boldsymbol{L}_v\boldsymbol{v} + \boldsymbol{L}_w\boldsymbol{w}$ where $\boldsymbol{L}_v$ and $\boldsymbol{L}_w$ are the interaction matrices related to the translational and rotational motions, respectively.

Now we can define the visual error $\boldsymbol{e} = \boldsymbol{s} - \boldsymbol{s}^*$, where $\boldsymbol{s}^*$ is the vector of desired image features. Classical IBVS control aims to ensure an exponential decrease of the error, so we can define the control input as $\mathbf{v} = -(\boldsymbol{L}_v)^{-1} (\lambda \boldsymbol{e} + \boldsymbol{L}_w\boldsymbol{w})$. In this case, to simplify the control, we make two approximations:

• If the camera image plane is parallel to the target plane: $\mathbf{L}_v\approx -\mathbb{I}_3$
• If the motion of the quadrotor is smooth and slow: $\mathbf{L}_w \mathbf{w}\approx 0$

The control input (for the translational part) then becomes $\boldsymbol{v} = \lambda \mathbf{e}$ with $\lambda > 0$. This equation does not require any estimation of 3D parameters and can be implemented based only on the observed image features $\mathbf{s}$.
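This simplified translational law is straightforward to implement (a numpy sketch; the gain and feature values are illustrative):

```python
import numpy as np

def ibvs_velocity(s, s_star, lam=0.5):
    # v = lambda * (s - s*): exponential decrease of e = s - s*, valid
    # under the approximations L_v ~ -I3 and L_w * w ~ 0 made above.
    return lam * (np.asarray(s) - np.asarray(s_star))

v = ibvs_velocity(s=[0.2, -0.1, 1.6], s_star=[0.0, 0.0, 1.5])
print(v)  # with lambda = 0.5: [0.1, -0.05, 0.05]
```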

The node visp_cam_vs waits for:

• The parameters of the camera (sensor_msgs::CameraInfo)
• The first image of the stream, in order to initialize the tracking.

Once the main loop starts, for each received image we compute the current features through the following steps:

• Track the blobs
• Order the blobs clockwise
• Build a polygon from them
• Compute its center of gravity and area
• Compute the object orientation (in this case we only need the angle around $z$)
• Build the vector $\mathbf{s}$
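The purely geometric steps above (clockwise ordering, polygon area and center of gravity via the shoelace formula) can be sketched as follows; blob detection and tracking themselves are omitted, and the blob coordinates are illustrative:

```python
import numpy as np

def order_clockwise(pts):
    # Sort points by decreasing angle around their mean: clockwise in
    # a standard y-up frame.
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    return pts[np.argsort(-ang)]

def polygon_area_centroid(pts):
    # Shoelace formula for the signed area and centroid of a polygon.
    x, y = pts[:, 0], pts[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = cross.sum() / 2.0
    cx = ((x + xs) * cross).sum() / (6.0 * area)
    cy = ((y + ys) * cross).sum() / (6.0 * area)
    return abs(area), np.array([cx, cy])

blobs = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # the four blobs of the square
area, cog = polygon_area_centroid(order_clockwise(blobs))
print(area, cog)  # area 4.0, centroid at the origin
```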

In Matlab we receive the vector $\mathbf{s}$ and compute the velocity to apply to the quadrotor: $\boldsymbol{v} = \lambda (\mathbf{s} - \mathbf{s}^*)$. We also want to control the rotation of the quadrotor around the $z$ axis (yaw). To do this we relate the angular velocity around $z$ to the orientation of the object, using $\theta_z$, the angle representing the rotation of the object around the $z$ axis: $\boldsymbol{\omega}_z = \lambda \theta_z$.

Position Based Visual Servoing for a 6 dofs Manipulator

In this demonstration we show the execution of a position-based visual servoing (PBVS) task involving a (simulated) 6-dof Viper 850 robot and a cubic target.

We define the servoing features, as in classical PBVS schemes, as the position and orientation (in the Rodrigues parametrization) of the object in the camera frame. These quantities must be regulated to a desired value. This is obtained by using the following servoing law implemented in Simulink: $$\left\{ \begin{array}{lcr} \boldsymbol{v}_c &=& -\lambda \left( ({}^{c^*}\mathbf{t}_{o} - {}^{c}\mathbf{t}_{o}) + \left[{}^{c}\mathbf{t}_{o}\right]_{\times} \theta \mathbf{u} \right)\\ \boldsymbol{\omega}_c &=& -\lambda \theta \mathbf{u} \end{array} \right.$$ where $\left(\boldsymbol{v}_c, \boldsymbol{\omega}_c\right)$ is the camera velocity expressed in the camera frame, $\lambda$ is a positive gain, ${}^{c^*}\mathbf{t}_{o}$ and ${}^{c}\mathbf{t}_{o}$ are the desired and current positions of the object in the camera frame, and finally $\left(\theta, \mathbf{u}\right)$ are the axis/angle parameters corresponding to the rotation matrix ${}^{c^*}\!\mathbf{R}_{c} = {}^{c^*}\!\mathbf{R}_{o}\,{}^{c}\mathbf{R}_{o}^T$, which gives the orientation of the current camera frame w.r.t. the desired one.
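A numerical sketch of this law in numpy (the helper names and test values are ours, not part of the Simulink implementation):

```python
import numpy as np

def skew(t):
    # [t]_x : cross-product (skew-symmetric) matrix of a 3-vector
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def pbvs_law(t_co, t_co_star, theta_u, lam=1.0):
    # v_c = -lam * ((t* - t) + [t]_x * theta*u),  w_c = -lam * theta*u
    t_co, t_co_star, theta_u = map(np.asarray, (t_co, t_co_star, theta_u))
    v_c = -lam * ((t_co_star - t_co) + skew(t_co) @ theta_u)
    w_c = -lam * theta_u
    return v_c, w_c

# Sanity check: at convergence (t = t*, theta*u = 0) both commands vanish.
v, w = pbvs_law(t_co=[0, 0, 0.5], t_co_star=[0, 0, 0.5], theta_u=[0, 0, 0])
print(v, w)  # two zero vectors
```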

The quantities ${}^{c}\mathbf{t}_{o}$ and ${}^{c}\mathbf{R}_{o}$ are extracted by an external ROS node using the ViSP model-based visual tracker. The source code for this node is part of the visp_tracker package and is available here.

Once the desired camera velocity has been computed, the joint velocity commands can be calculated as: $$\dot{\mathbf{q}} = \mathbf{J}(\mathbf{q})^\dagger \begin{bmatrix}\boldsymbol{v}_c \\ \boldsymbol{\omega}_c \end{bmatrix}$$ where $\mathbf{J}(\mathbf{q})$ is the robot Jacobian and $\dagger$ denotes the pseudoinverse. The current joint configuration $\mathbf{q}$ is provided by V-REP.
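This resolution can be sketched with numpy's pseudoinverse (the Jacobian below is a placeholder identity matrix, not the Viper 850's):

```python
import numpy as np

def joint_velocities(J, v_c, w_c):
    # qdot = J(q)^+ [v_c; w_c], with ^+ the Moore-Penrose pseudoinverse
    twist = np.concatenate([v_c, w_c])    # stacked 6-vector camera twist
    return np.linalg.pinv(J) @ twist

J = np.eye(6)                             # placeholder 6x6 Jacobian
qdot = joint_velocities(J, np.array([0.1, 0.0, 0.0]),
                           np.array([0.0, 0.0, 0.2]))
print(qdot)  # with J = I: [0.1, 0, 0, 0, 0, 0.2]
```

Using the pseudoinverse rather than a plain inverse keeps the computation well defined near singular configurations and for non-square Jacobians.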

Video of the simulations
