Direction des Relations Européennes et Internationales (DREI)

Programme INRIA "Equipes Associées"

 

o                 I. DEFINITION

ASSOCIATED TEAM "Bird" (2008 selection)

 

INRIA Projects: Bunraku / IPARLA

Foreign Partner: State Key Lab CAD&CG, Zhejiang University

INRIA Research Center: Rennes / Bordeaux Sud-Ouest
INRIA Theme: Cog. D

Country: China

 

 

French coordinators

Foreign coordinator

Last name, first name

 Donikian, Stéphane

Guitton, Pascal

 Peng, Qunsheng

Grade/position

 CR1

Professor

Professor

Affiliation
(specify the department and/or laboratory)

 IRISA/INRIA Rennes

LABRI/ INRIA Futurs

 State Key Lab CAD&CG, Zhejiang University

Mailing address

 Campus de Beaulieu, 35042 Rennes Cedex

Université Bordeaux 1
351 Cours de la Libération
33405 Talence Cedex

 Zijingang Campus, Hangzhou, 310027, China

URL

 http://www.irisa.fr/bunraku

http://www.labri.fr

 http://www.cad.zju.edu.cn/

Phone

 02 99 84 72 57

05 40 00 69 18

 (0086)571-88206681

Fax

 02 99 84 71 71

05 40 00 66 69

 (0086) 571-88206680

E-mail

 donikian@irisa.fr

guitton@labri.fr

 peng@cad.zju.edu.cn


Abstract of the proposal

Interactions entre les mondes Réels et Virtuels / Interactions between Real and Virtual Worlds

 

The main purpose of this collaboration is to provide new tools for managing the interaction between real and virtual worlds. We first want to ease the interaction between real users and virtual worlds during modelling and collaborative tasks. Concerning the generation of virtual worlds, we will focus not on fully automatic solutions but on semi-automatic ones that take human decisions into account. This integration of the user is essential to provide intuitive processes and better immersion. Whatever the interface between the virtual and the real world (from a simple stylus to a set of cameras), we have to capture motions and gestures accurately, and to interpret the intentions of humans, in order to integrate these actions and intentions correctly. Motion interpretation is also crucial for collaborative tasks between real and virtual humans, because understanding the human's intentions is required to provide correct responses. For modelling, this would result in more intuitive solutions, since they would be based on natural abilities. For animation, this would result in a tighter integration of the virtual and real worlds, with real-time editing and control of virtual humans. Understanding the content of a representation of the real world, such as the one given by a video, is then required to augment real scenes with dynamic virtual content. To achieve this goal, we want to work on the realism of virtual humans, the coherency between the acquired geometry and motion of the real world and those of the virtual one, the close integration of the two worlds during rendering, and the accuracy of the modelling and editing process.

 

 

Presentation of the Associated Team

 

o                 1. Presentation of the foreign coordinator

Qunsheng Peng was born in 1947. He received his undergraduate degree from the Dept. of Automation at Beijing Mechanical College in 1970, and his master's degree from the Dept. of Manufacturing Engineering at the Beijing University of Aeronautics and Astronautics in 1980. He obtained his PhD in Computer Science from the University of East Anglia in 1983. He was successively lecturer (1984-1986) and associate professor (1986-1988) in the Dept. of Applied Mathematics at Zhejiang University, and became professor in 1988 at the State Key Lab CAD&CG of Zhejiang University. He was joint director (1988-1997) and director (1997-2002) of the State Key Lab CAD&CG. He is currently Vice President of the Lab's Academic Committee. He has received several honors and awards, among them the Computer Graphics Achievement Award at Chinagraph'2000 (the annual conference on Computer Graphics in China). He also serves on the editorial boards of several journals: J. of Computing Science & Technology (since 1991), J. of CAD & CG (in Chinese, since 1996), Chinese J. of Computers (since 1997), Chinese J. of Software (since 1999), J. of Zhejiang University (Science) (since 2000) and The Visual Computer (since 2002).

Dr. Peng's research interests include computer simulation and animation, scientific data visualization, realistic image synthesis, and geometric modeling. Over the past years, he has published more than 200 papers on shading models, real-time rendering, curved surface modeling, and infrared image synthesis in academic journals and conferences.

Links to the resumes of the professors involved:

o                   Qunsheng Peng

o                   Weidong Geng

o                   Xueying Qin

o                   Hongxin Zhang

 

o                 2. Collaboration History

2.1. Between Bunraku, Iparla and CAD & CG Lab

The CAD & CG State Key Lab and the BUNRAKU (formerly SIAMES) and IPARLA project-teams were partners in the STIC-Asie project on Virtual Reality (2004-2006), supported by CNRS, INRIA and the French Ministry of Foreign Affairs. The project's objective was to develop a research network on Virtual Reality including collaborations between Asian and French teams. The project was an opportunity for mutual discovery between the different teams. Pr. Qunsheng Peng and Anatole Lécuyer both attended the second workshop of the STIC-Asie project in Strasbourg in December 2005. Pr. Zhigeng Pan and Stéphane Donikian both attended the third workshop in Tokyo in 2006. These first exchanges were reinforced during other conferences in the field of computer animation, where researchers from both teams met: CASA 2005, CASA 2006, SCA 2006 and CGI 2006 in Hangzhou, China, where the CAD & CG Lab is located. During their stay in Hangzhou for CGI, Anatole Lécuyer and Franck Multon visited the State Key Lab and were able to exchange directly with several researchers and professors.

In parallel, Stéphane Donikian and Pascal Guitton made a study trip in November 2005 on behalf of the French Ministry of Foreign Affairs, in order to visit Computer Graphics laboratories in the vicinity of Shanghai; one of the visited teams was the CAD & CG Lab. Following this visit, Stéphane Donikian and Pascal Guitton on the French side, and Pr. Q. Peng (State Key Lab) and X. Denshen (NUST) on the Chinese side, organized a Sino-French seminar in June 2006, with the support of INRIA and the French Consulate in Shanghai (http://www.irisa.fr/prive/donikian/SFS06), in order to extend scientific relationships between the two countries. Following this seminar, new collaborations were initiated between French and Chinese teams.

On the one hand, Pascal Guitton, Patrick Reuter and Xavier Granier made a visit from 17 to 24 March 2007, with the support of the French Consulate, in order to specify the scientific content of a possible collaboration. The topics of 3D modeling and non-photorealistic rendering, together with their adaptation to mobile devices, raised great interest. A common paper on intuitive modeling by sketching is in preparation and will be finalized during the visit of Dr. Xavier Granier at the end of October 2007 and the stay of Dr. Hongxin Zhang at the end of November 2007.

On the other hand, Stéphane Donikian obtained funding for several visits from the French Consulate in Shanghai. First, Franck Multon and Stéphane Donikian visited the State Key Lab from 9 to 16 October 2006 in order to formalize common research topics between the teams. Then, Pr. Peng visited IRISA for one month, from 20 November to 19 December 2006. Nicolas Pronost defended his PhD on 7 December 2006 with Pr. Peng on his jury, and stayed one month in Hangzhou in December 2006 to implement the collaboration on motion editing. Franck Multon and Julien Pettré stayed three weeks in Hangzhou in May 2007 in order to continue the existing work on motion editing and to extend the collaboration to new topics on Augmented Reality with Dr. X. Qin. Dr. Weidong Geng and one of his PhD students, Q. Li, came to IRISA for a one-month stay in June 2007. During their stay, they worked in collaboration with N. Pronost and F. Multon in order to merge their motion retrieval algorithms and motion adaptation methods. They also prepared a demo for publication at IEEE VR (see Section II.1.3). Finally, Dr. Qin and one of her PhD students stayed at IRISA in Rennes for one month in October 2007 in order to start implementing common work on augmented video (see Section II.3.4).

Two publications illustrate our first results in the context of our collaboration:

[Pronost07] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Techniques d’animation pour gérer les interactions entre un combattant virtuel et un sujet réel. Proceedings of the congress of the French Association for Virtual Reality AFRV, October 2007, Marseille, France

[Pronost08] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Interactive animation of autonomous characters: application to virtual kung-fu fighting. submitted to IEEE-VR 2008, 2008

2.2. Between INRIA and CAD & CG Lab

No other current collaboration exists between INRIA and the State Key Lab CAD&CG of Zhejiang University.


 

o                 3. Impact

3.1. On the existing collaboration

Funding from the French Consulate in Shanghai allowed us to start a collaboration with the CAD & CG Lab. These first exchanges have already raised several complementary research topics and potential publications. An associated team will allow the development of longer-term research projects.

3.2. On collaborations with other INRIA teams

The proposed associated team already brings together two INRIA projects from two different centers, and will naturally result in closer collaborations between them. Furthermore, with this associated team, INRIA can extend its collaboration with China (e.g., the LIAMA) to the strongest computer graphics lab in China. Moreover, the CAD & CG Lab's research on Augmented Reality may also be of interest to other INRIA teams such as LAGADIC.

3.3. On other collaborations

The IPARLA and Bunraku projects were partners on a proposal for a CNRS-JST collaboration program with the University of Tokyo and OLM Digital in Japan. The proposed associated team and the JST-CNRS program will have a direct impact on each other, on such topics as sketching, with Dr. Takeo Igarashi at the University of Tokyo; non-photorealistic rendering, with OLM Digital; and intelligent behaviour, with Keio University. Interaction with virtual worlds will be tackled with the Tokyo Institute of Technology, the University of Tokyo and Osaka University, by physically constraining the motion of the hand thanks to haptic or pseudo-haptic techniques.

o                 4. Others

The CAD & CG State Key Lab is one of the ten best State Key Labs in China, and Zhejiang University in Hangzhou ranks third among Chinese universities. They can recruit excellent students as PhD candidates - more than one hundred Master's and PhD students are advised by State Key Lab professors - and conduct high-quality research. Professor Peng authored the first Chinese SIGGRAPH paper, and last year two of their papers were accepted at SIGGRAPH and two others at Eurographics (this year, one at SIGGRAPH and one at Eurographics). They have ongoing collaborations with the Fraunhofer Institute in Darmstadt and the Hong Kong Polytechnic University. They are strongly willing to establish a long-term collaboration with French teams and would welcome a French PhD taking a research position in their laboratory in the future.



o                 II. 2008 Work plan

 

o                 1. Motion Editing

Motion capture is now widely used for animating virtual humans. However, processing motion capture data (generally based on the Cartesian positions of external markers) in order to obtain plausible animations for virtual humans is very complex and raises many problems. One of the main issues is to adapt the motion of the actor to the anthropometric sizes of the virtual human, because they are generally different. This "motion retargeting" process has been widely studied in computer animation by solving geometric constraints such as placing the feet on the ground without sliding [Gleicher98]. More generally, reusing motion capture data involves adapting the trajectories in order to take specific constraints into account, such as touching an object or adapting the motion to non-flat grounds. Both motion retargeting and the latter adaptation are generally performed by solving geometric constraints.

In some cases, dynamics is also an important issue for adapting motion to complex situations. For example, adapting a gait to external perturbations (such as pushes or a sloped terrain) implies dealing with physics. This has mainly been done to calculate motions after a hit or a punch occurs [Zordan05]. A passive simulation based on a dynamic model computes an immediate reaction to external perturbations. Then the system searches a database for a motion that is compatible with the current state of the system (such as receiving a hit from the right side).

All these processes require a lot of computation time and generally require complete knowledge of the constraints in advance. Indeed, solving constraints at given times raises the problem of obtaining continuous motions, leading to iterative methods. As a consequence, these methods are not suitable for real-time animation and do not allow real-time interactions between virtual and real humans. A solution consists in precomputing several situations and storing the resulting clips in databases. Querying motions in such a database can be performed in real-time.

Motion graphs [Kovar02] have been introduced to organize motion capture data into graphs whose nodes are poses and whose links represent all the possible transitions. Hence, two poses are linked if they are close to each other.
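As a minimal illustration of this idea, the Python sketch below links every pair of poses whose distance falls under a threshold. It is only a toy version: the actual construction in [Kovar02] compares windows of frames through point-cloud alignment rather than single flattened poses, and prunes the resulting graph.

    import numpy as np

    def build_motion_graph(poses, threshold):
        """Link pose i to pose j when the transition i -> j looks plausible.

        `poses` is an (n_frames, n_dofs) array; each row is one captured pose.
        The plain Euclidean test below is a stand-in for the windowed
        point-cloud metric used in the real method.
        """
        edges = []
        n = len(poses)
        for i in range(n):
            for j in range(n):
                if i != j and np.linalg.norm(poses[i] - poses[j]) < threshold:
                    edges.append((i, j))  # candidate transition between frames
        return edges

    # Toy usage: 100 random 30-DOF poses.
    rng = np.random.default_rng(0)
    edges = build_motion_graph(rng.normal(size=(100, 30)), threshold=6.0)

New motion is then generated by walking such a graph from pose to pose.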

Several works have been carried out to use such motion graphs to deal with various situations, such as displacing a virtual boxer that has to punch a target specified by the user [Lee04]. However, taking various targets into account leads to huge databases composed of thousands of motion clips. Precomputation is very long and relies on data that are adapted to one given skeleton and limited to the recorded targets.

Several recent works have proposed to search a database for a motion that satisfies a set of constraints or descriptions. For example, motion templates were defined to associate a synthetic description with a clip [Muller05]. It is then possible to query the database for the motions that best correspond to a given description [Muller06]. For example, a user can query motions that involve a fast motion of the arm in the forward direction. The resulting subset of motions may contain throws and punches, which both correspond to this description. However, the resulting motions still have to be adapted in order to accurately satisfy the requirements of the animation: the size of the skeleton and the constraints imposed on some body parts.

The main challenge here is to associate the above motion retrieval approaches with algorithms capable of modifying the retrieved sequence to accurately deal with various situations. This is the main topic addressed in this proposal.

1.1 INRIA Know-how and justification of the collaboration with State Key Lab

INRIA has solid experience in motion retargeting and adaptation to kinematic constraints. Firstly, we have defined a morphology-independent representation of motion that allows retargeting clips very rapidly to new characters [Kulpa05a]. Instead of storing joint angles, this representation is based on mixing Cartesian and orientation data. These data are divided by the actor's anthropometric sizes, leading to adimensional data that can easily be scaled to the dimensions of new characters. Based on this representation, we have also designed algorithms that solve kinematic and kinetic constraints (control of the center of mass position) in real-time for hundreds of characters at 30 Hz on a standard computer [Kulpa05b].
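A minimal sketch of this normalization idea, assuming a single limb length as the scaling factor (the actual representation in [Kulpa05a] normalizes per body segment and mixes Cartesian and orientation data):

    import numpy as np

    def normalize_motion(cartesian_traj, actor_size):
        """Divide Cartesian trajectories by the actor's size: adimensional data."""
        return cartesian_traj / actor_size

    def retarget_motion(adimensional_traj, target_size):
        """Rescale the adimensional trajectories to a new character's size."""
        return adimensional_traj * target_size

    # Hypothetical example: a hand trajectory captured on a 0.60 m arm,
    # replayed on a character with a 0.75 m arm.
    traj = np.array([[0.10, 0.30, 0.50],
                     [0.20, 0.35, 0.55]])  # metres
    retargeted = retarget_motion(normalize_motion(traj, 0.60), 0.75)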

In real-time animation, characters generally do not use a unique motion and have to compose several different actions, such as moving around, grasping, kicking or manipulating tools. As a consequence, an animation framework should be able to blend several motions together. We have proposed a new method to synchronize motions [Menardais04a] and to blend them in a real-time framework. The user just has to specify the weight associated with each motion at each time step and let the system recalculate these weights to ensure coherence between the motions [Menardais04b].
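The coherence constraint can be illustrated by a short sketch: whatever weights the user provides, the engine must at least clamp and renormalize them before blending. The actual method in [Menardais04b] goes further, adjusting the weights per body part after synchronizing the clips:

    import numpy as np

    def blend_poses(poses, weights):
        """Weighted blend of synchronized candidate poses for one time step.

        `poses` is an (n_motions, n_dofs) array and `weights` holds the
        user-supplied weight of each motion.
        """
        w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
        if w.sum() == 0.0:
            raise ValueError("at least one motion needs a positive weight")
        w /= w.sum()  # renormalize so the weights sum to 1
        return (w[:, None] * poses).sum(axis=0)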
All the above works have been gathered into a common animation engine named MKM, for "Management of Kinematic Motions" (www.irisa.fr/siames/MKM), which has been evaluated and used in several companies, in video games and for animating workers in virtual plants [Multon07]. However, the link between the behavioral model and MKM generally has to be completely redesigned for each application. One of the problems is the automatic selection of motions in the database before MKM blends them and adapts the result to the situation. Motion retrieval has been explored in the State Key Lab CAD&CG of Zhejiang University.

In parallel with these methods based on kinematic constraint solving, we have also developed a biomechanical model of the human body and a method to verify whether a modified motion is physically valid [Pronost06T]. The problem is then to guide the modifications applied to the motion so that it satisfies the laws of physics.

1.2 State Key Lab CAD&CG Know-how and justification of the collaboration with INRIA

In order to make motion capture widely available, motion capture data needs to be made reusable. This means that we may create the needed motions by reusing pre-recorded motion capture data [Geng03]. Furthermore, with the increased availability of motion capture data and motion editing techniques, there is currently a trend to create the desired motion by piecing together example motions from a database [Kovar02]. This alternative approach provides a relatively cheap and time-saving way to quickly obtain high-quality motion data for animating creatures and characters.

A motion database is the basis for motion reuse. The major weakness of motion capture data is that it lacks structure and adaptability.
A typical strategy for organizing motion data is based on a directed graph. Rose et al. employ "verb graphs", in which the nodes represent verbs and the arcs represent transitions between verbs [Rose98]. The verb graph acts as the glue that assembles verbs (defined as sets of example motions) and their adverbs into a runtime data structure for seamless transitions from verb to verb for the simulated figures within an interactive runtime system. Arikan & Forsyth present a similar framework that generates human motions by cutting and pasting motion capture data [Arikan02]. The collection of motion sequences is represented as a directed graph in which each frame is a node, and there is an edge from every frame to every frame that could follow it in an acceptable splice. They further collapse together all the nodes (frames) belonging to the same motion sequence. Kovar et al. construct a directed graph called a motion graph that encapsulates the connections within the database [Kovar02]. In a motion graph, edges contain either pieces of original motion data or automatically generated transitions, and all edges correspond to clips of motion. Nodes serve as choice points connecting these clips, i.e., each outgoing edge is potentially the successor to any incoming edge. New motion can be generated simply by building walks on the graph.
Yu et al., in the State Key Lab, implemented a framework which allows the user to retrieve motions via Labanotation [Yu05]. For each motion clip in the library, a corresponding Labanotation sequence is generated as an additional motion property, as shown in Figure 1.1. A similarity metric for Labanotation sequences is used to search for the motions that have similar Laban descriptions. The framework is able to retrieve motion segments that match only part of the query Laban sequence. Then, based on dynamic programming, these segments are stitched together to form a smooth output motion that optimally matches the query Laban sequence.


Figure 1.1: Example of editing based on motion retrieval. The upper part of the figure is the query Laban sequence and its corresponding motion; the lower part is the resulting matched motion clip and its corresponding Laban sequence.

Sketch drawing is an intuitive and comprehensive means of conveying movement ideas in character animation. Davis et al. provide a simple sketching interface for articulated figure animation. The user draws the skeleton on top of the 2D sketch, and the system then constructs the set of poses that exactly match the drawing; it also allows the user to guide the system to the desired character pose [Davis03]. Thorne et al. focused on high-level motions and developed cursive motion notations that can be used to draw motions. A desired motion is created for the character by sketching a gesture such as a continuous sequence of lines, arcs, and loops [Thorne04]. Li et al., in the State Key Lab, proposed a novel sketch-based approach to assist the authoring and choreographing of Kungfu motions at the early stage of animation creation [Li06]. Given two human figure sketches corresponding to the initial and closing postures of a Kungfu form, and trajectory drawings on specific moving joints, MotionMaster can directly rapid-prototype a realistic 3D motion sequence by sketch-based motion retrieval and refinement over a motion database, as shown in Figure 1.2. The animators can then preview and evaluate the recovered motion sequence from any viewing angle. After the 3D motion sequence has been associated with the 2D sketch drawing, the animator can also interactively and iteratively make changes to the 2D sketch drawing, and the system will automatically transfer the 2D changes to the 3D motion data of current interest. This greatly helps the animator focus on developing the movement idea during the evolutionary process of building motion data for articulated characters.


Figure 1.2: Multiple resulting motion segments matched to the input 2D sketches.

Motion retrieval returns the motion that best corresponds to the desired situation but does not guarantee that the constraints are exactly met. For example, the skeleton of the character should be the same as that of the actor to ensure that the choice and the animation are correct. In the same way, the selected motion should be corrected in order to accurately reach a target in space with a given body part. These limitations could be overcome by coupling motion retrieval with Bunraku's work on motion adaptation.

1.3 Common results and projects

As described above, the approaches developed in Bunraku and the State Key Lab are complementary. On the one hand, Bunraku has developed methods to synchronize, blend and adapt motions according to the orders provided by a user. However, a controller is missing to automatically select the most convenient motions before they are blended and adapted accurately to the situation. On the other hand, the State Key Lab proposes methods to organize motion capture data and to retrieve the clip that best fits the current situation. However, this requires huge databases to deal with numerous different constraints, such as reaching points with body parts.
We have thus decided to associate the two approaches in order to give more autonomy to virtual characters. Hence, for a given task that the virtual human has to achieve, the method should select and adapt the best clips that are supposed to solve the problem. A challenge is to use as few motions as possible in order to lower the computation time spent searching the database and to decrease the size of the database in memory. This method should also be compatible with interactive applications in which a virtual character is supposed to react immediately to orders provided unpredictably by users at any time.

In 2007, we associated the two methods, as described above, to solve the problem of making virtual humans interact with real subjects in virtual reality. The selected application was a fight between a real human, whose motions were captured in real-time, and a virtual kung-fu master. The latter can be associated with several different geometric models with different anthropometric sizes. A supervisor (a human being) asks the virtual human to kick or punch the real subject. The subject is free to move in the virtual environment, so that the kung-fu master has to select the best motions to follow and strike him, as shown in Figure 1.3.

Figure 1.3: Example of the fight between a real subject and a virtual kung-fu master.

A small database composed of fewer than 20 motions is used to animate the kung-fu master. The database is organized so as to facilitate the motion retrieval process. Each motion is associated with some data, such as its semantics (punch, kick or displacement). The database is organized as clusters to accelerate the search algorithm. During the search, the current pose of the kung-fu master is retargeted to the actor's skeleton (the one that performed each motion in the database). Indeed, punching a character placed 1.5 m away requires different motions depending on whether the character is small or tall. This problem has to be considered during motion retrieval; it is handled using the motion retargeting algorithm developed in MKM.
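The sketch below shows the kind of two-level search such an organization enables, assuming the query pose has already been retargeted to the actor's skeleton; the cluster centroids and clip start poses are hypothetical stand-ins, not the actual MKM data structures:

    import numpy as np

    def retrieve_motion(query_pose, clusters):
        """Nearest cluster centroid first, then best clip inside that cluster.

        `clusters` maps a centroid pose (tuple) to a list of
        (clip_id, start_pose) pairs.
        """
        centroid = min(clusters,
                       key=lambda c: np.linalg.norm(np.asarray(c) - query_pose))
        clip_id, _ = min(clusters[centroid],
                         key=lambda e: np.linalg.norm(np.asarray(e[1]) - query_pose))
        return clip_id

    # Hypothetical two-cluster database of 1-DOF "poses".
    db = {(0.0,): [("kick_left", (0.1,)), ("kick_right", (-0.2,))],
          (5.0,): [("punch", (5.2,))]}
    print(retrieve_motion(np.array([4.8]), db))  # -> punch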

Once a motion is selected, it has to be adapted to the exact position of the target and to the current pose of the kung-fu master. This task is performed by MKM. This work has been submitted to IEEE-VR, the most important conference in virtual reality; review decisions are expected on November 5, 2007. We will also present a communication in French at the conference of the French Association for Virtual Reality (AFRV) at the end of October (with a full paper printed in the proceedings).

In the near future, we will continue to associate the two methods in three main directions.

  1. The first one deals with designing high-level interfaces for commanding virtual humans. The main challenge here is to connect the current work to high-level behavioural models or to motion planning algorithms. This work could be associated with the other project on augmented reality (see Section 3). Hence, motion planning will be developed in order to drive virtual humans in a real video. Motion planning will generate orders that the animation engine is supposed to obey by selecting and modifying the most convenient motion.
  2. The second one consists in retrieving and modifying a motion (in a large database) that best corresponds to signals obtained from a few accelerometers (such as Nintendo's Wiimote) and a video. These signals are captured on a subject who performs gestures in order to animate an avatar or to interact with virtual humans. This project is led by the State Key Lab but requires mixing motion retrieval and adaptation, as described in this section.
  3. The third one deals with improving the physical correctness of the animations. Nicolas Pronost, who worked on dynamic simulation during his PhD, will visit the State Key Lab for one year. He will work on defining a biomechanical model of a high jumper.

1.4 State of the art References

  • [Davis03] J. Davis, M. Agrawala, E. Chuang, Z. Popovic, D. Salesin: A sketching interface for articulated figure animation. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003), pp. 320–328.
  • [Gleicher98] M. Gleicher. Retargetting motion to new characters. In Proceedings of SIGGRAPH 98, pp. 33–42, 1998. ACM Press.
  • [Rose98] C. Rose, M. F. Cohen, B. Bodenheimer: Verbs and Adverbs: Multidimensional Motion Interpolation. IEEE CG&A, 18(5), (1998), pp. 32-38.
  • [Thorne04] M. Thorne, D. Burke, M. van de Panne: Motion doodles: An interface for sketching character motion. In Proceedings of SIGGRAPH'04 (2004), pp. 424–431.
  • [Zordan05] V. B. Zordan, A. Majkowska, B. Chiu, M. Fast. Dynamic response for motion capture animation. ACM Transactions on Graphics: Proceedings of SIGGRAPH'05, 24(3), pp. 697–701, 2005.

1.5 References of the partners

  • [Héloir06] A. Héloir, N. Courty, S. Gibet, F. Multon (2006). Temporal alignment of expressive gestures. Journal of Visualization and Computer Animation, 17(3-4): 347-358.
  • [Kulpa05a] R. Kulpa, F. Multon, B. Arnaldi (2005). Morphology-independent representation of motions for interactive human-like animation. Computer Graphics Forum, 24(3): 343-352.
  • [Kulpa05b] R. Kulpa, F. Multon (2005). Fast inverse kinematics and kinetics solver for human-like figures. Proceedings of IEEE Humanoids'2005, pp. 38-43, Tsukuba, Japan.
  • [Kulpa05T] R. Kulpa (2005). Adaptation interactive et performante des mouvements d'humanoïdes synthétiques : aspects cinématique, cinétique et dynamique. PhD thesis, INSA de Rennes, November 2005.
  • [Menardais04a] S. Ménardais, R. Kulpa, F. Multon (2004). Synchronization of interactively adapted motions. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 325-335, Grenoble, August 2004.
  • [Menardais04b] S. Ménardais, F. Multon, R. Kulpa, B. Arnaldi (2004). Motion blending for real-time animation while accounting for the environment. Proceedings of IEEE Computer Graphics International, pp. 156-159, Crete, June 2004.
  • [Menardais03T] S. Ménardais (2003). Fusion et Adaptation temps réel de mouvements acquis pour l'animation d'humanoïdes synthétiques. PhD thesis, Université de Rennes 1, January 2003.
  • [Multon07] F. Multon, R. Kulpa, B. Bideau. MKM: a global framework for animating humans in virtual reality applications. Presence, to appear, 2007.
  • [Pronost06] N. Pronost, G. Dumont, G. Berillon, G. Nicolas (2006). Morphological and stance interpolation in database for simulation of bipedalism of virtual humans. The Visual Computer, 22(1): 4-13, January 2006.
  • [Pronost06T] N. Pronost (2006). Définition et réalisation d'outils de modélisation et de calcul de mouvement pour des humanoïdes virtuels. PhD thesis, Université de Rennes 1, France, December 2006.
  • [Li06] Qilei Li, Weidong Geng, Tao Yu, Xiaojie Shen, Newman Lau, Gino Yu. MotionMaster: Authoring and Choreographing Kung-fu Motions by Sketch Drawings. ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2006, pp. 233-241.
  • [Shen05] Xiaojie Shen, Qilei Li, Tao Yu, Weidong Geng, Newman Lau. Mocap data editing via movement notations. The Ninth International Conference on Computer Aided Design and Computer Graphics, Hong Kong, 2005.
  • [Yu05] Tao Yu, Xiaojie Shen, Qilei Li, Weidong Geng. Motion retrieval based on movement notation language. Computer Animation and Virtual Worlds (CASA 2005), 16(3-4): 273-282, July 2005. ISSN 1546-4261.
  • [Geng03] W. Geng, G. Yu. Reuse of motion capture data in animation: a review. In Lecture Notes in Computer Science 2669 (2003), pp. 620–629.
  • [Geng04] Weidong Geng, Yan Huang, Yunhe Pan. Step/stance planning and hit-point repositioning in martial arts choreography. Proceedings of the 17th International Conference on Computer Animation & Social Agents, 2004, pp. 95-102.
  • [Geng03a] W.-D. Geng, C.-S. Lai, G. Yu. Design of Kungfu library for 3D game development. The 2nd International Conference on Application and Development of Computer Games, Hong Kong, (2003), pp. 138-141.
  • [Geng03b] W.-D. Geng, M. Chan, C.-S. Lai, G. Yu. Implementation of runtime motion adjustment in game development. The 2nd International Conference on Application and Development of Computer Games, Hong Kong, (2003), pp. 142-147.
  • [Pronost07] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Techniques d'animation pour gérer les interactions entre un combattant virtuel et un sujet réel. Proceedings of the congress of the French Association for Virtual Reality (AFRV), October 2007, Marseille, France.
  • [Pronost08] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Interactive animation of autonomous characters: application to virtual kung-fu fighting. Submitted to IEEE-VR 2008, 2008.

 

o                 2. Intuitive but precise 3D modeling

In order to create simpler interfaces, new approaches to 3D modeling have been developed, based on the human ability to quickly draw a global overview of an object. These approaches are commonly referred to as 3D sketching. Their principle is to infer the shape of a 3D model and to add details through different editing operations (e.g., cutting, extrusion), all based on sketched 2D curves.

Teddy [Igarashi et al. 99], the precursor of 3D freeform sketching, introduced a gesture grammar to convert drawn curves into corresponding modeling operations accessible to non-expert users. Both the interactions and the geometric models have since been improved (e.g., [Karpenko et al. 02, Tai et al. 04, Schmidt et al. 05]), but unfortunately, changing the global shape of objects remains a challenging task.
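The inflation idea behind such systems can be sketched in a few lines: a closed 2D silhouette becomes a height field whose elevation grows with the distance to the contour, so wide regions become fat and narrow ones stay thin. This is only a rough stand-in for the skeleton-based inflation of [Igarashi et al. 99], not the actual Teddy algorithm:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def inflate_silhouette(mask):
        """Inflate a binary silhouette (True inside the sketched closed curve)."""
        d = distance_transform_edt(mask)  # distance of inside pixels to the contour
        return np.sqrt(d)                 # rounded elevation profile

    # Toy usage: a filled disc becomes half of a rounded blob.
    yy, xx = np.mgrid[-32:32, -32:32]
    height = inflate_silhouette(xx**2 + yy**2 < 30**2)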

For a more flexible approach, new interfaces and interactions [Levet et al. 07] and new representations of models [Tai et al. 04, Levet and Granier 07] have to be developed. In this context, the partners are currently working on three research axes (from shorter- to longer-term research):

  1. Currently, both the main limitation and the main advantage of sketching systems is the exclusive use of 2D interactions based on drawing 2D curves. This can result in difficulties for the relative 3D positioning of different objects, and thus in limited accuracy. We want to explore new interaction devices, such as the CAT [Hachet et al. 03], and new interfaces [Levet et al. 07] in order to remove this limitation while preserving the intuitive sketching approach.
  2. When 3D modeling can be restricted to a specific context (like virtual humans or car modeling [Thorne et al. 04, Yang et al. 05]), more specific, robust and accurate processes can be used, based on generic template representations of such models. We are currently investigating this approach in the context of mobile modeling on limited-resource devices such as cell phones and PDAs.
  3. More generally, we want to increase the accuracy of 3D modeling systems based on sketching. We thus have to investigate classical drawing metaphors like shading painting [Kerautret et al. 05] and transpose these approaches into interaction systems for inferring 3D models from this 2D information. Such an approach would offer an efficient and precise modeling tool on devices with limited interaction capabilities.

Based on the same assumptions, the partners also wish to extend the modeling to realistic appearance and expressive shading design, taking cultural differences and similarities into account in order to provide better-adapted processes.

2.1 INRIA and State Key Lab CAD & CG Know-how

The partners have developed, in parallel and in a complementary way, experience in sketching for free-form modeling.

On one side, the State Key Lab of CAD & CG has worked on 3D representations using convolution surfaces [Tai et al. 04]. Such an approach is very well suited to surfaces with nice geometric properties, but is limited in terms of the objects that can be generated.

On the INRIA side, the experience spans the development of new interaction tools [Kerautret et al. 05, Levet et al. 07]. Such approaches increase the range of surfaces that can be generated, but still have some problems with geometric quality [Levet and Granier 07].

By combining these different bodies of knowledge, we are working on providing a more robust but still general sketching approach.

2.2 State of the art References

  • [Igarashi et al. 99]  T. Igarashi, S. Matsuoka and H. Tanaka: Teddy: a sketching interface for 3D freeform design, ACM SIGGRAPH (1999)
  • [Karpenko et al. 02]  O. Karpenko, J. Hughes and R. Raskar: Free-form Sketching with Variational Implicit Surfaces, Proc. of Eurographics (2002)
  • [Thorne et al. 04]  M. Thorne, D. Burke and M. van de Panne: Motion Doodles: An Interface for Sketching Character Motion, ACM Transactions on Graphics (2004), 23(3)
  • [Schmidt et al. 05]  R. Schmidt, B. Wyvill, M.C. Sousa and J.A. Jorge: ShapeShop: Sketch-Based Solid Modeling with BlobTrees, Proc. Eurographics Workshop on Sketch-Based Interfaces (2005)
  • [Yang et al. 05]  C. Yang, D. Sharon and M. van de Panne: Sketch-based Modeling of Parameterized Objects, Proc. Eurographics Workshop on Sketch-Based Interfaces (2005)

2.3 References of the partners

  • [Hachet et al. 03] M. Hachet, P. Guitton, P. Reuter and F. Tyndiuk: The CAT for Efficient 2D and 3D Interaction as an Alternative to Mouse Adaptations, Proc. of Virtual Reality Software and Technology, (VRST 2003), best paper award
  • [Tai et al. 04]  C.-L. Tai, H. Zhang and C.-K. Fong: Prototype Modeling from Sketched Silhouettes based on Convolution Surfaces, Computer Graphics Forum (2004), 23(1)
  • [Kerautret et al. 05]  B. Kerautret, X. Granier and A. Braquelaire: Intuitive Shape Modeling by Shading Design, Proc. of Smart Graphics (2005)
  • [Levet and Granier 07]  F. Levet and X. Granier: Improved Skeleton Extraction and Surface Generation for Sketch-based Modeling, GI (Graphics Interface) (2007)
  • [Levet et al. 07] F. Levet, X. Granier and C. Schlick: Multi-View Sketch-based FreeForm Modeling, Proc. of Smart Graphics (2007)

o                 3. Integrating Navigating Virtual Humans in Natural Scenes

3.1 Problem statement:

3.1.1 Augmented Video

Over the past decade, Augmented Reality (AR) [Azuma et al. 01], which aims to merge virtual objects into real scenes, has become an invaluable technique for a wide variety of applications. Augmented video [Zhang et al. 06] is an off-line AR technique for highly demanding applications such as film-making, television and environmental assessment, in which seamless composition is of essential importance. Seamless composition calls for geometrical, spatio-temporal and colorimetric coherency between virtual and real objects. Geometrical coherency is ensured by recovering the camera parameters (trajectory, focal length) from the video sequence; the 3D model of the scene can then be reconstructed from dense depth maps. After this, we render the virtual objects' shadows and account for occlusions, while considering the high-quality illumination effects of outdoor scenes.
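Geometrical coherency boils down to reprojecting virtual geometry with the recovered per-frame camera. Below is a minimal sketch assuming the recovered parameters are packed into a 3x4 projection matrix; occlusions and shadows, which need the reconstructed depth maps, are ignored here:

    import numpy as np

    def project_points(P, points_3d):
        """Project 3D points to pixel coordinates with a 3x4 camera matrix P."""
        pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
        proj = (P @ pts_h.T).T
        return proj[:, :2] / proj[:, 2:3]  # perspective divide

    # Toy camera: 500 px focal length, principal point (320, 240), no rotation.
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    print(project_points(P, np.array([[0.0, 0.0, 2.0]])))  # -> [[320. 240.]]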

3.1.2 Virtual Humans’ navigation

Virtual humans' navigation is first considered as a motion planning problem. Motion planning techniques and representations of 3D environments have been intensively studied in the Robotics field [Latombe 91]. Two main classes of approaches can be distinguished: roadmap-based approaches and cell-decomposition-based approaches. Roadmap-based approaches capture the connectivity of the free space of a given environment into a network of paths. Paths are guaranteed to be collision-free with respect to the obstacles of a scene, and feasible according to the mechanical constraints of the considered system. Several techniques allow computing such a roadmap [Arikan et al. 01, Thomas et al. 00, Bayazit et al. 02, Choi et al. 03, Pettré et al. 03]. Roadmap-based techniques provide only an implicit representation of obstacles and may produce unrealistic results due to the lack of an explicit representation of their shape and distance. Cell-decomposition-based approaches model the environment as a set of interconnected areas. In contrast to roadmap-based approaches, solutions to a query are series of collision-free areas instead of collision-free paths. The provided solution thus contains more practical information, such as the distance to obstacles and the available surrounding free space, which eases reactive navigation processes that account for surrounding dynamic obstacles and improves their performance. Two main cell-decomposition techniques are to be distinguished: approximate decomposition [Kuffner et al. 98, Tecchia et al. 00, Shao et al. 06, Pettré et al. 2006, Bandi et al. 98] and exact decomposition [Kallmann et al. 03, Lamarche et al. 04].
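As a concrete toy example of an approximate cell decomposition, the sketch below plans over a uniform grid of free/blocked cells with a breadth-first search; exact decompositions such as [Kallmann et al. 03, Lamarche et al. 04] replace the grid with a constrained triangulation of the environment:

    from collections import deque

    def plan_path(free, start, goal):
        """Shortest path over a grid decomposition (free[y][x] = navigable)."""
        h, w = len(free), len(free[0])
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:  # rebuild the path by walking back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nxt
                if 0 <= nx < w and 0 <= ny < h and free[ny][nx] and nxt not in parent:
                    parent[nxt] = cell
                    queue.append(nxt)
        return None  # the goal is unreachable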

 

While path planning techniques provide a global solution-path leading to a goal, dynamic obstacles along the path are taken into account using reactive navigation techniques. A reactive navigation process may rely on particle systems [Helbing 2000] or rule-based systems [Reynolds 1987], or be predictive [Paris et al. 2007]. Reactive navigation is crucial for achieving realistic navigation, as is taking psychological factors into account in the decision process [Wiener et al. 2003].
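A one-step particle-system update in the spirit of [Helbing 2000] can be written as follows; the gains and the 2 m interaction radius are made-up values for illustration, and a predictive model such as [Paris et al. 2007] would anticipate future collisions instead of reacting to current distances:

    import numpy as np

    def reactive_step(pos, vel, goal, neighbours, dt=0.05):
        """Advance one agent: goal attraction plus repulsion from neighbours."""
        force = 1.0 * (goal - pos)               # attraction toward the goal
        for other in neighbours:
            offset = pos - other
            dist = np.linalg.norm(offset)
            if 1e-6 < dist < 2.0:                # 2 m interaction radius
                force += 2.0 * offset / dist**2  # inverse-square repulsion
        vel = vel + dt * force
        return pos + dt * vel, vel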

3.1.3 Integrating Virtual Humans in Natural Scenes

A short-term objective of the collaboration between Bunraku and CAD&CG is to develop techniques for integrating virtual humans into real video scenes. Each team has developed complementary expertise to address this problem. Our first goal is to consider the problem as a Computer Animation one. We want to provide the animation designer with a tool for populating a video scene with virtual humans, by controlling their goals and the paths and timing of their locomotion. When dealing with a high number of entities, it is not conceivable to define the motion of each single virtual human. We must therefore provide the designer with high-level control, while taking the need for interactivity into account. To reach this goal, a number of problems have to be addressed:

  • from an initial video sequence, the geometrical context of the scene must be extracted: the camera's trajectory, a 3D model of the static obstacles in the scene and the trajectories of moving obstacles,
  • we must then structure the resulting data to provide a framework for the design process. This preliminary step will improve the performance of the following ones by pre-processing the available information; in particular, we apply cell-decomposition techniques to capture the geometry and the topology of the navigable space,
  • finally, using motion planning techniques, we want to provide the user with high-level control over the virtual humans' motion. We want virtual humans to reach user-specified destinations while respecting the presence of static and moving obstacles. We want the user to be able to modify the proposed trajectories to fit specific needs (scenario-guided action). We also want the method to work at interactive rates so that the user can evaluate the result immediately and proceed by trial and error.

From the CAD&CG lab point of view, the problem is to compute a 3D representation of the real world from a video sequence to enable virtual object integration. From the Bunraku point of view, the problem is to exploit this representation in order to enable interaction between virtual and real objects in the final scene.

3.2 INRIA Know-how

Figure 3.1: A virtual city populated with virtual inhabitants.

The Bunraku team is a key actor in the field of virtual human simulation. We have acquired expertise on several topics related to the collaboration objectives:

  • cell-decomposition techniques for human and crowd navigation [Lamarche et al. 04, Pettré et al. 2006],
  • realistic human path planning with a multi-criteria decision process [Paris et al. 05],
  • realistic reactive navigation based on experimental data analysis [Paris et al. 2007].

In the context of our collaboration with the State Key Lab of CAD&CG, we want to address the problem of interactively designing the motion of virtual humans in real scenes. The complexity of the motion planning problem needs addressing: here we benefit from our experience in crowd simulation. Figure 3.1 illustrates our previous work on crowd design, simulation and rendering for Computer Animation purposes. Using cell-decomposition techniques, we were able to populate large-scale environments with virtual humans from simple high-level directives.

3.3 State-Key Lab of CAD & CG Know-how

Figure 3.2: Some images from a real video sequence and the extracted 3D model.

The CAD&CG Lab team has a strong background in augmented video and augmented reality. We have solved several problems related to high-quality augmented reality:

  • Using vision-based techniques, we obtain a robust and efficient solution for the camera trajectory of very long video sequences with large variations of the focal length [Zhang et al., CVPR 2007].
  • Based on the precise camera trajectory, 3D models can be reconstructed from dense depth maps [Zhang et al. 06] (see Figure 3.2).
  • With both the camera trajectory and the 3D models, virtual objects can be integrated into the video sequences with shadows, occlusions, and interactive effects [Qin et al. 02].

The key to the interaction between real scenes and virtual crowds is sensing the motions in the real scene, such as those of human beings and cars, so that the virtual crowd can react correctly. The CAD&CG lab has tracked moving cars in video sequences and then integrated a virtual car into the sequence with the same pose, demonstrating high-quality car tracking; it has also successfully tracked a pedestrian in a video sequence. The CAD&CG lab has also developed a method to segment video sequences according to the motion of the scene instead of its intensity, which is necessary for generating masks of pedestrians and cars [An et al. 06]. For dynamic scenes, such as walking crowds, the integration of virtual objects and real scenes requires higher-quality 3D models, as well as clear edges and poses for the dynamic obstacles.
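A crude way to see what segmenting by motion rather than by intensity means is plain frame differencing, sketched below; the actual method of [An et al. 06] robustly estimates whole motion layers instead of thresholding pixel changes:

    import numpy as np

    def motion_mask(prev_frame, frame, threshold=12.0):
        """Mark pixels whose grayscale value changed between consecutive frames."""
        diff = np.abs(frame.astype(float) - prev_frame.astype(float))
        return diff > threshold  # boolean mask of moving pixels (pedestrians, cars)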

3.4 On-going Work and Perspectives

Julien Pettré visited the State Key Lab of CAD&CG in May 2007. During his stay, he initiated the collaboration with associate professor Dr. X. Qin. The objectives of the collaboration and a working plan were defined. Dr. Qin and one of her PhD students, Mr. Zhong Fan, came for a one-month stay in the Bunraku team in October 2007. During their stay, we started implementing the required modules and validated the data flows between them. Bunraku is in charge of providing the tool for designing the motion of virtual humans in a given scene, with high-level control of trajectories and the ability to handle a large number of entities. CAD&CG is in charge of developing tools for extracting the camera parameters and the trajectories of real moving obstacles in the scene. First results and a publication are planned for the beginning of 2008.

The long-term objective of this collaboration is to develop tools for integrating virtual humans into real video scenes. The performance of the algorithms needs addressing in order to reach real-time integration of virtual humans into the scene and to apply our solutions in the Virtual Reality field. The movie industry is also targeted, which requires enhanced video-matting techniques in order to superimpose real and virtual objects seamlessly. Realistic rendering techniques for virtual humans are required as well.

3.5 State of the art References

  • [Arikan et al. 01] Arikan, O., S. Chenney, et al. (2001). Efficient Multi-Agent Path Planning. Computer Animation and Simulation '01, Springer-Verlag Wien New York.
  • [Azuma et al. 01] Ronald Azuma, Yohan Bailot, Reinhold Behringer, Steven Feiner, Simon Julier, and Blair MacIntyre. Recent advances in augmented reality. IEEE Computer Graphics and Applications, 2001,21(6): 34-47.
  • [Bandi et al. 98] Bandi, S. and D. Thalmann (1998). "Space Discretization for Efficient Human Navigation." Computer Graphics Forum 17(3): 195-206.
  • [Bayazit et al. 02] Bayazit, O. B., J.-M. Lien, et al. (2002). Roadmap-Based Flocking for Complex Environments. Pacific Conference on Computer Graphics and Applications.
  • [Choi et al. 03] Choi, M. G., J. Lee, et al. (2003). "Planning Biped Locomotion using Motion Capture Data and Probabilistic Roadmaps." ACM Transactions on Graphics 22(2): 182-203.
  • [Helbing 2000] Dirk Helbing, I. Farkas, and Tamas Vicsek. Simulating dynamical features of escape panic. Nature, 407: 487–490, 2000.
  • [Hartley et al. 00] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge University Press, 2000.
  • [Kallmann et al. 03] Kallmann, M., H. Bieri, et al. (2003). "Fully Dynamic Constrained Delaunay Triangulations." Geometric Modelling for Scientific Visualization.
  • [Kuffner et al. 98] Kuffner, J. J. (1998). Goal-directed navigation for animated characters using real-time path planning and control. CAPTECH '98: Workshop on Modelling and Motion Capture Techniques for Virtual Environments, Springer-Verlag.
  • [Latombe 91] Latombe, J. C. (1991). Robot Motion Planning. Boston, Boston: Kluwer Academic Publishers.
  • [Reynolds 1987] C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21(4): 25–34, 1987.
  • [Shao et al. 06] Shao, W. and D. Terzopoulos (2006). "Environmental Modeling for Autonomous Virtual Pedestrians." SAE 2005 Transactions Journal of Passenger Cars: Electronic and Electrical Systems 114(7): 735-742.
  • [Tecchia et al. 00] Tecchia, F. and Y. Chrysanthou (2000). Real time rendering of densely populated urban environments. RenderingTechniques '00 (10th EurographicsWorkshop on Rendering), Brno, Czech Republic, Springer-Verlag.
  • [Wiener et al. 2003] Jan M. Wiener and Hanspeter A. Mallot. 'Fine-to-coarse' route planning and navigation in regionalized environments. Spatial Cognition & Computation, 3(4): 331–358, 2003.

3.6 References of the partners

  • [Lamarche et al. 04] Lamarche, F. and S. Donikian (2004). "Crowd of virtual humans: a new approach for real time navigation in complex and structured environments." Computer Graphics Forum 23(3).
  • [Legargeant 05] Legargeant, G. (2005). Subdivision spatiale d'environnement informé pour la navigation d'entités virtuelles. Rennes, INSA.
  • [Mars 06] Mars, C. (2006). Combiner topologie et sémantique dans un modèle de représentation des environnements virtuels. Rennes, INSA.
  • [Paris et al. 05] Paris, S., S. Donikian, et al. (2005). Towards more Realistic and Efficient Virtual Environment Description and Usage. V-Crowds, EPFL, Lausanne, Switzerland, November 2005.
  • [Paris et al. 06] S. Paris, S. Donikian and N. Bonvalet. Environmental Abstraction and Path Planning Techniques for Realistic Crowd Simulation. Computer Animation and Virtual Worlds, 17(3-4), 2006.
  • [Paris et al. 2007] Sébastien Paris, Julien Pettré, and Stéphane Donikian. Pedestrian reactive navigation for crowd simulation: a predictive approach. Computer Graphics Forum, Eurographics'07, 2007.
  • [Pettré et al. 2006] Julien Pettré, Pablo de Heras Ciechomski, Jonathan Maïm, Barbara Yersin, Jean-Paul Laumond, and Daniel Thalmann. Real-time navigating crowds: scalable simulation and rendering. Computer Animation and Virtual Worlds, 17(3-4): 445–455, 2006.
  • [Pettré et al. 03] Pettré, J., J. P. Laumond, et al. (2003). A 2-Stages Locomotion Planner for Digital Actors. Proc. of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA'03).
  • [Pettré et al. 05] Pettré, J., J. P. Laumond, et al. (2005). A navigation graph for real-time crowd animation on multilayered and uneven terrain. V-Crowds, EPFL, Lausanne, Switzerland, November 2005.
  • [Thomas et al. 00] Thomas, G. and S. Donikian (2000). Modelling virtual cities dedicated to behavioral animation. EUROGRAPHICS'2000, Interlaken, Switzerland, Blackwell Publishers.
  • [Thomas 05] Thomas, R. (2005). Modèle de mémoire et de carte cognitive spatiales : application à la navigation du piéton en environnement urbain. PhD thesis, Université de Rennes 1, Rennes.
  • [Thomas et al. 06] R. Thomas and S. Donikian. A spatial cognitive map and a human-like memory model dedicated to pedestrian navigation in virtual urban environments. Spatial Cognition 2006, Bremen, Germany. In Lecture Notes in Computer Science, Springer Verlag, 2006.
  • [An et al. 06] X. An, X. Qin, H. Bao. Automatic and robust segmentation of motion layers in image sequences. In IS&T/SPIE 18th Annual Symposium Electronic Imaging: Science and Technology, 15–19 January 2006, San Jose, California, USA.
  • [Qin et al. 04] X. Qin, E. Nakamae, W. Hua, Y. Nagai, and Q. Peng. Anti-Aliased Rendering of Water Surface. Journal of Computer Science and Technology, Sept. 2004, Vol.19, No.5, pp.626-632 (SCI, EI)
  • [Qin et al. 03] X. Qin, E. Nakamae, K. Tadamura and Y. Nagai. Fast Photo-Realistic Rendering of Trees in Daylight. Computer Graphics Forum (Eurographics 2003), Vol.22, No.3, pp.243-252.(SCI)
  • [Qin et al. 02] X. Qin, E. Nakamae and K. Tadamura. Automatically Compositing Still Images and Landscape Video Sequences. IEEE Computer Graphics and Applications, Vol.22, No.1, Jan./Feb., pp.68-78. (2002) (SCI)
  • [Qin et al. 99] X. Qin, K. Tadamura, Y. Nagai and E. Nakamae. Creating a Precise Panorama from Panned Video Sequence Images. Journal of Information Processing, Vol.40, No.10, pp.3685-3693, Oct. (1999).
  • [Nakamae et al. 02] E. Nakamae, X. Qin, K. Tadamura and Y. Nagai. Fast Rendering for Photo-Realistic Trees in Daylight. Animation Theaters of SIGGRAPH 2002.
  • [Nakamae et al. 01] E. Nakamae, X. Qin and K. Tadamura. Rendering of Landscapes for Environmental Assessment. Landscape and Urban Planning, Volume 54, Issues 1-4, 25 May, Pages 19-32. (2001) (SCI)
  • [Nakamae et al. 99] E. Nakamae, X. Qin, G. Jiao, P. Rokita and K. Tadamura. Computer Generated Still Images Composited with Panned/Zoomed Landscape Video Sequences. Journal of the Visual Computer, Vol.15, No.9, pp.429-442. (1999) (SCI)
  • [Tadamura et al. 01] K. Tadamura, X. Qin G. Jiao and E. Nakamae. Fast Rendering Water Surface for Outdoor Scenes. International Journal of Image and Graphics, Vol. 1, No. 2, pp.313-327. (2001)
  • [Tadamura et al. 01b] K. Tadamura, X. Qin, G. Jiao and E. Nakamae. Rendering Optical Solar Shadows Using Plural Sunlight Depth Buffers. Journal of the Visual Computer, Vol.17, No.2, pp.76-90. (2001) (SCI)
  • [Zhang et al. 06] G. Zhang, X. Qin, X. An, W. Chen, H. Bao. As-Consistent-As-Possible Compositing of Virtual Objects and Video Sequences. Computer Animation and Virtual Worlds (CASA 2006), 17: 305-314.

 

o                 4. Potential for collaboration extensions

Next year, our objective is to continue on the three topics defined in this proposal. In order to reach the objectives of the project, Nicolas Pronost will spend nine months at the State Key Lab of CAD&CG in a postdoctoral position with Pr. Geng. We will apply for a jointly supervised PhD (the suggested student is Yijiang Zhang) co-directed by Pr. Qunsheng Peng and Stéphane Donikian on the topic of Augmented Video (with Dr. X. Qin and J. Pettré as co-advisors), and for one PhD co-directed by Hongxin Zhang and Pascal Guitton on Intuitive Modelling. In order to evaluate the first results obtained in collaboration and to define the next steps of the work plan, a joint seminar will be organized at the end of 2008.

 

We also want to extend our collaboration to new topics, with other colleagues from the CAD & CG State Key Lab, for example Professor Zhigeng Pan, whose research topics are human behaviour modeling, virtual reality and sports. Here is a list of possible extensions to our current collaboration:

  • High-level interfaces for commanding virtual humans. The main challenge here is to connect the current work to high-level behavioural models or to motion planning algorithms. This work could be associated with the current topic on augmented video with virtual humans (Section 3).
  • Motion database retrieval and modification from low-dimensional input (a joystick equipped with accelerometers, such as Nintendo's Wiimote).
  • Physics-based motion correction. Nicolas Pronost, who worked on dynamic simulation during his PhD, will visit the State Key Lab for one year. He will work on defining a biomechanical model of a high jumper.
  • Enhancing interactions between real objects and virtual objects. In Section 3 we address the problem of mixing virtual and real objects in a video scene from a Computer Animation point of view (providing tools for designing a scenario-guided action). Following this work, we will have to face the complexity of this problem to provide interactive on-line tools, and to address the realistic rendering of virtual humans to get a believable and seamless mix of virtual and real objects in the final scene.
  • Modeling the appearance of an object. Both in realistic and expressive rendering, designing the appearance of multiple objects can be tedious. Based on our experience in intuitive 3D modeling and the involvement of humans in the design process, we will look into providing simpler and more efficient tools.

 

2008 provisional budget

1. Co-financing

The collaboration has already benefited from funding from the French Consulate in China (10 k€ in 2006 and 6 k€ in 2007) and from DREI through an "accessit" in 2007.

We wish to submit two funding requests for a jointly supervised PhD thesis within the framework of alternating scholarship programs (such as the one offered by the French Embassy in China). This scholarship includes a monthly allowance for the student, social security coverage and civil liability insurance. The associated team funding will allow us to cover the housing and transportation costs required for the joint supervision.

A Sino-French seminar to promote new collaborations between France and China will be organized by Pr. Q. Peng during 2008. This seminar benefits from financial support from the French Consulate in Shanghai, estimated at 10 k€.

On the Chinese side, an application will be submitted next year to obtain complementary funding for this collaboration from the NSFC and from the Chinese Ministry of Science and Technology. Our Chinese partner is confident about obtaining one of these two grants, since their laboratory is a State Key Lab and the number one in its field in China. In addition, they have obtained funding for Nicolas Pronost's postdoctoral stay.

PROSPECTIVE ESTIMATE OF CO-FINANCING

Organization                                                   Amount
NSFC                                                           40-60 k€
OR Chinese Ministry of Science and Technology                  80-100 k€
French Consulate in China (2008), Sino-French seminar          10 k€
French Embassy in China (2008-2011), jointly supervised PhD    19 k€
Postdoctoral stay of N. Pronost                                4 k€

Total                                                          73-133 k€

2. Exchanges

Description of the exchanges planned in both directions: hosting of researchers from your partner and INRIA missions to your partner.
Justify the usefulness and specific interest of the exchanges and the complementarity of the teams.
Specify whether the participants are senior researchers or juniors (interns, PhD students, post-docs). Specify whether these exchanges take place in the context of scientific work, the organization of joint events, seminars, tutorials or schools, or research training: indicate the students involved in the collaboration, give an estimate of their number on each side, and state whether PhD theses (possibly under joint supervision) are planned (for each exchange, specify the duration and the provisional schedule).

Several exchanges are planned between the partner teams. By topic:

  • Motion editing:
    • postdoctoral stay of Nicolas Pronost in China from November 2007 to August 2008. We wish to support this stay financially, since the Chinese funding is limited to 420 € / month,
    • two-week visit of Pr. W. Geng to France at the end of 2008,
  • Sketching / modeling and lighting design:
    • two-week mission of a senior French researcher to China (X. Granier),
    • three-week visit of a senior Chinese researcher to France (H. Zhang),
    • one-month mission of a French PhD student to China,
    • four-month stay of a Chinese PhD student in France (jointly supervised under the alternating doctoral scholarship program),
  • Integration of virtual humans into real scenes:
    • two-week mission of a senior French researcher to China (J. Pettré),
    • four-month stay of a Chinese PhD student in France (jointly supervised under the alternating doctoral scholarship program, Y. Zhang),
    • two-week visit of a senior Chinese researcher to France (X. Qin)

We also wish to fund the missions of researchers from the INRIA partner teams to participate in the Sino-French seminar organized in China.

 

ESTIMATE OF EXPENSES

                            Number    Hosting     Missions    Total
Senior researchers          13        6 k€        23 k€       29 k€
Post-docs                   1                     11 k€       11 k€
PhD students                3         10.5 k€     3 k€        13.5 k€
Interns
Other (specify)

Total                                                         53.5 k€
minus total co-financing                                      22.5 k€

Requested "Associated Team" funding                           31 k€

Remarks or observations:

The mission expenses include participation in the seminars that will be held in China, allowing a reduction of the total organization costs.

 

 
