
A Framework for Synthesizing Personalised Human Motions from Motion Capture Data and Perceptual Information

Team and supervisors
Team: MimeTIC
Thesis supervisor: Ludovic Hoyet
Thesis subject


This project targets the personalisation of virtual human motions, which has become a requisite for creating ever more lifelike virtual worlds in industries ranging from entertainment to training and education. Although the visual realism of virtual human motions has improved drastically over the last decades, largely thanks to advances in motion capture, current animation techniques still produce a certain uniformity of motion across characters. For a single individual (e.g., a main character), displaying the same generic motions to all users can limit engagement, as the motions are not personalised for any user. Similarly, the absence of variation in large groups of individuals also affects realism when they all move in the same manner. Such variations can indeed be created through tremendous amounts of manual artistic work, which undeniably improves overall realism (e.g., crowds in computer-generated movies such as Warcraft, Star Wars, or The Hobbit); however, it is still impossible to automatically create this level of personalisation for interactive applications.

While this need for greater perceptual variety in virtual characters has been identified, existing approaches focus solely on variations of their visual aspect, i.e., appearance and shape [MLD*08, MLH*09]. However, motion is extremely important for humans to perform actions and to express themselves, particularly in nonverbal communication. More specifically, humans do not all perform actions in precisely the same manner, nor even in the same manner every time, which is why variations are important for creating ever more believable and expressive virtual humans. This project therefore aims at creating variety in human motions, in order to create a new generation of more realistic virtual characters. Variety, however, is not simply a reflection of random differences: it results from complex intra-individual (e.g., fatigue) and inter-individual (e.g., morphology, age, sex, emotions) differences, which are seldom taken into account in today's virtual characters. As such differences can be difficult to quantify, this project will focus on how viewers perceive motion variations, in order to automatically produce natural motion personalisation accounting for inter-individual variations. In short, our goal is to automate the creation of motion variations that represent given individuals according to their own characteristics, and to produce natural variations that are perceived and identified as such by users.

Figure 1. In this project, we want to automatically personalise motions to account for inter-individual variations. Left – Examples of variations due to morphology: appropriate (blue) and inappropriate (green) motions for morphology (errors appear around legs and arms, but also in terms of speed, inertia, balance, etc.). Right – Examples of emotional variations depicting angry motions from four actors. Such examples can be captured, but are difficult to synthesise.


The goal of this PhD is therefore to propose new models for personalising human motions for different users and characters. Unlike existing approaches, which rely on learning statistical models to create variations [HPP05, LBK09], the goal is to explore the creation of variations from a perceptual point of view.

In order to reach this objective, the first step of the PhD will consist of exploring what makes the motions of individuals perceptually different. This will require acquiring a unique dataset of hundreds of human motions covering a large range of characteristics (e.g., age, morphology, gender). The PhD candidate will be trained on the different motion capture systems available in the MimeTIC team, e.g., the Vicon motion capture system available in the Immermove platform, or the two Xsens full-body motion capture systems available in the team. This last system, commonly used in the video game and movie industries, provides high-quality data with very little post-processing, and can be donned quickly (<10 min). Once acquired, this unique dataset will enable us to conduct a large-scale perceptual experiment to identify how visually different the motions of each captured individual are from the same motions of every other individual in our database. While such an experiment will require adapting the perceptual framework used in our previous work [HRZ*13] to deal with the number of motions to compare visually, it will give us the unprecedented opportunity of combining this large amount of motion capture and perceptual information to automatically identify, using statistical and data mining techniques, the motion features that contribute most to visual motion variety.
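As a minimal illustration of the kind of analysis this step could involve, the sketch below ranks motion features by how well their pairwise differences between individuals correlate with perceived dissimilarity ratings. All data, feature names, and thresholds here are synthetic placeholders, not results from the project; the actual analysis would operate on hundreds of captured motions and a far richer feature set.

```python
# Hypothetical sketch: rank motion features by how well their pairwise
# differences predict perceived motion dissimilarity between individuals.
# All feature names and values below are illustrative, not project data.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Per-individual motion features (illustrative: walking speed in m/s,
# arm-swing amplitude in rad), extracted from captured locomotion clips.
features = {
    "A": {"speed": 1.2, "arm_swing": 0.6},
    "B": {"speed": 1.5, "arm_swing": 0.7},
    "C": {"speed": 1.1, "arm_swing": 1.1},
}

# Synthetic perceptual dissimilarity ratings in [0, 1] from a pairwise
# comparison study ("how different do these two walks look?").
perceived = {("A", "B"): 0.5, ("A", "C"): 0.7, ("B", "C"): 0.6}

def rank_features(features, perceived):
    """Correlate per-feature differences with perceived dissimilarity."""
    names = next(iter(features.values())).keys()
    scores = {}
    for f in names:
        diffs = [abs(features[a][f] - features[b][f]) for a, b in perceived]
        scores[f] = pearson(diffs, list(perceived.values()))
    # Features whose differences best track perception come first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = rank_features(features, perceived)
```

In this toy example, arm-swing differences track the perceptual ratings better than speed differences, so arm swing would be flagged as the more perceptually relevant feature; the real project would replace this simple correlation with proper statistical and data mining techniques over the full dataset.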

Then, the second step of the PhD will consist of proposing new models for synthesising motion personalisations based on these identified perceptual features, according to given individual characteristics. Because of the complexity of human motion, it is highly likely that creating such variations will depend on the type of motion considered. Therefore, to validate this first attempt at perceptually-based personalisation, we propose to first explore simple models for cyclic motions in a main transversal scenario on locomotion, as locomotion is the motion most commonly used in interactive applications, presents large potential for personalisation (e.g., morphology, personality, fitness), and has large potential impact on our targeted applications, i.e., interactive crowd animations for computer graphics. Finally, the last step of this PhD will consist of pushing the synthesis of motion variations even further, exploring their creation for interactive large-scale scenarios, our targeted applications, where both performance and realism are critical. Such scenarios require automatically and efficiently personalising the motions of large numbers of virtual humans, and therefore identifying the best means of producing variations for large groups of characters. In particular, we propose to identify through perceptual experiments how, when and where to add variations in the motions of groups of individuals, and to build on these insights to design adaptive perception-based methods that automatically provide the best trade-off between visual realism and computational load, as we have demonstrated that introducing motion variety in crowd animations contributes to the overall naturalness of virtual scenes [HOKP16].
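To give a feel for the kind of adaptive trade-off such methods could make, the sketch below allocates a per-character "variation budget" in a crowd based on visibility and distance to the camera. The levels and distance thresholds are purely illustrative assumptions, not a result of the project; the actual methods would be driven by the perceptual experiments described above rather than fixed cut-offs.

```python
# Hypothetical sketch: distance-based budget for per-character motion
# personalisation in a crowd, trading visual realism for computation.
# The thresholds and level names are illustrative assumptions.

def variation_level(distance_to_camera, in_view):
    """Decide how much motion personalisation to spend on one character."""
    if not in_view:
        return "none"     # off-screen: shared generic motion clip
    if distance_to_camera < 10.0:
        return "full"     # foreground: individual, perceptually tuned motion
    if distance_to_camera < 40.0:
        return "partial"  # midground: vary only a few salient features
    return "none"         # far background: uniformity is unlikely to be noticed

# Each tuple: (distance to camera in metres, currently visible?)
crowd = [(3.0, True), (25.0, True), (80.0, True), (12.0, False)]
levels = [variation_level(d, v) for d, v in crowd]
```

A perception-based version of this idea would replace the hard-coded distances with thresholds measured in the proposed experiments, and could further modulate the budget by visual saliency rather than distance alone.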

Results and Impact

The aim of this PhD is to propose new models to interactively personalise the motions of virtual characters, based on physiological parameters and drawing on perceptual insights to produce natural motions. The expected outcome of this project is therefore a breakthrough in the creation of natural virtual human content, where motion variety will be available in an automatic, visually realistic, and computationally efficient manner. This will be particularly impactful for large-scale simulations, where variations still cannot be automated today and are therefore either manually created by artists or kept to a bare minimum. Such results will therefore have potential impact at several levels, in particular for all industries requiring realistic human motions (e.g., games, movies, education, training), both for developers and for artistic creation.

In terms of training, the PhD candidate will acquire skills in the domains of Character Animation, Motion Capture, Motion Analysis and User Experimentation, which are invaluable for career development in Computer Graphics.


The candidate will work in the MimeTIC team in the joint Inria / IRISA research centre located in Rennes. Inria (www.inria.fr) and IRISA (http://www.irisa.fr/) are among the leading research centres in Computer Science in France, and the MimeTIC team is internationally recognised in the fields of Computer Graphics and Virtual Human Simulation. Research activities in MimeTIC focus on simulating virtual humans that behave in a natural manner and act with natural motions.

Requirements for candidacy

• Strong programming skills (C/C++ recommended)

• Basic knowledge of computer animation and graphics

• Interests in user experimentation and motion capture techniques


We are looking for motivated candidates. Please send a CV, a motivation letter, reference letters, and any relevant material to ludovic.hoyet@inria.fr

[HOKP16] L. Hoyet, A.-H. Olivier, R. Kulpa and J. Pettré. 2016. Perceptual Effect of Shoulder Motions on Crowd Animations. ACM Transactions on Graphics (SIGGRAPH 2016), 35(4).
[HRZ*13] L. Hoyet, K. Ryall, K. Zibrek, H. Park, J. Lee, J. Hodgins and C. O'Sullivan. 2013. Evaluating the Distinctiveness and Attractiveness of Human Motions on Realistic Virtual Bodies. ACM Transactions on Graphics (SIGGRAPH Asia 2013), 32(6).
[HPP05] E. Hsu, K. Pulli and J. Popović. 2005. Style Translation for Human Motion. ACM Transactions on Graphics, 24(3).
[LBK09] M. Lau, Z. Bar-Joseph and J. Kuffner. 2009. Modeling Spatial and Temporal Variation in Motion Data. ACM Transactions on Graphics (SIGGRAPH Asia 2009), 28(5).
[MLD*08] R. McDonnell, M. Larkin, S. Dobbyn, S. Collins and C. O'Sullivan. 2008. Clone Attack! Perception of Crowd Variety. ACM Transactions on Graphics (SIGGRAPH 2008), 27(3).
[MLH*09] R. McDonnell, M. Larkin, B. Hernandez, I. Rudomin and C. O'Sullivan. 2009. Eye-catching Crowds: Saliency Based Selective Variation. ACM Transactions on Graphics (SIGGRAPH 2009), 28(3).


Keywords: Virtual Characters, Human Motion, Perception, User Experimentation
IRISA - Campus universitaire de Beaulieu, Rennes