Creating compelling immersive situations requires dealing with several important elements. Among these, simulating and displaying convincing virtual humans that naturally interact with users is crucial to replicating real-life situations and fostering collaboration, given the importance of interpersonal interactions in everyday life. Over the last decade, great advances have been made in the quality of such virtual characters, in terms of appearance (e.g., MetaHuman characters) or the ability to control characters in game-like situations (e.g., Motion Matching [But15]). However, when it comes to interactions with users, such virtual characters typically display only limited, and often scripted, behaviours. For instance, virtual pedestrians are typically steered to avoid other characters and users using crowd simulation methods (e.g., RVO [VLM08]), but will not react appropriately if users suddenly startle them, intrude on their personal space, or act inappropriately.
The goal of this PhD is therefore to create virtual characters that display more appropriate reactions to a range of social interactions in immersive situations. The underlying observation is that virtual characters often remain unresponsive to interactions that would be considered unacceptable in real situations, which is detrimental to the believability of the immersive situations. A typical demonstrator of this work would be a virtual character displaying its annoyance at someone intruding on its personal space, e.g., by coming too close, through both its body gestures and facial expressions. Such reactive characters are crucial for creating credible immersive situations, in line with the concept of Plausibility Illusion presented by Slater et al. [Sla09, SSC10] (i.e., “the overall credibility of the scenario (in particular in relation to how a similar situation might be in physical reality)”).
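As a rough illustration of the kind of demonstrator described above, the following minimal Python sketch maps the user's distance to a coarse reaction of the character. The distance thresholds, reaction labels and function name are all illustrative assumptions (the thresholds are loosely inspired by proxemics zones), not part of the project; a real system would calibrate them through user studies and feed the labels to an animation controller.

```python
import math
from dataclasses import dataclass

# Hypothetical proxemics thresholds (metres); values would need calibration.
INTIMATE_ZONE = 0.45
PERSONAL_ZONE = 1.2

@dataclass
class Reaction:
    body: str  # body-gesture label for the animation system
    face: str  # facial-expression label

def personal_space_reaction(user_pos, character_pos):
    """Map the user's distance to the character to a coarse reaction."""
    dist = math.dist(user_pos, character_pos)
    if dist < INTIMATE_ZONE:
        # User is far too close: react visibly through body and face.
        return Reaction(body="step_back", face="annoyed")
    if dist < PERSONAL_ZONE:
        # User is entering personal space: display mild discomfort.
        return Reaction(body="lean_away", face="uneasy")
    return Reaction(body="idle", face="neutral")
```

Even such a simple rule already couples body and facial channels, which is the minimal form of the multimodal reactions the project targets.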
To reach this goal, this PhD will focus on two main research axes: 1) identifying the minimal set of reactive capabilities that a virtual character should possess for displaying believable social interactions, and 2) proposing novel methods to endow virtual characters with such natural reactions.
In the first axis of the PhD, the objective will be to identify the minimum set of reactive capabilities that a virtual character should display to react naturally to users. This will first require establishing a taxonomy of such reactive capabilities, including body, facial, emotional, vocal or even physical reactions to users. For instance, previous work investigated enabling virtual characters to touch users depending on the context of the interaction [CPT21], a capability that is seldom accounted for despite its potential to support novel interpersonal interactions with users. Similarly, virtual characters should be able to react emotionally to user interactions, positively or negatively, through both their posture and facial expressions. In addition, identifying the reactive capabilities that virtual characters should display also raises the question of predicting users’ intentions from their motion, and therefore of identifying key characteristic elements of the interaction, such as differentiating between giving an object to a virtual character (collaborative) and poking the virtual character (potentially inappropriate).
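To make the intention-prediction question concrete, here is a deliberately naive Python heuristic distinguishing a hand-over from a poke. The input features (hand speed, palm orientation, held-object flag), the 0.8 m/s threshold and the label names are all hypothetical; an actual solution would likely learn such a classifier from tracked motion data rather than hand-code rules.

```python
def classify_intent(hand_speed, palm_up, holds_object):
    """Coarse heuristic separating a hand-over from a poke.

    hand_speed:   approach speed of the user's hand in m/s (assumed input)
    palm_up:      whether the palm faces upward, typical of offering an object
    holds_object: whether hand tracking detects a held object
    """
    if holds_object and palm_up and hand_speed < 0.8:
        # Slow, open-handed approach with an object: likely collaborative.
        return "give_object"
    if hand_speed >= 0.8 and not holds_object:
        # Fast, empty-handed thrust: likely a poke.
        return "poke"
    # Ambiguous motion: defer to a neutral waiting behaviour.
    return "unclear"
```

The point of the sketch is that even a minimal intent signal lets the character select between a welcoming and a defensive reaction instead of ignoring the user.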
Overall, the results of this PhD will contribute to creating more compelling and reactive virtual characters, as well as to better understanding the limits of appropriate and inappropriate interactions between real and virtual persons in immersive situations. The results will also be shared through demonstrators displaying virtual characters reacting more naturally to user social interactions, as well as learning novel reactions from user demonstrations.
This PhD will be supervised by members of the Virtus team, who have complementary expertise in studying, modelling and evaluating virtual humans, especially by leveraging Virtual Reality, and will take place in the joint Inria/IRISA research centre located in Rennes.
The project will last for the 3 years of the PhD and should involve:
- Year 1: literature review including a taxonomy of reactive actions; proposition of a first set of minimal reactive capabilities and development of an initial reactive model (demonstrator D0)
- Year 2: evaluation of the initial reactive model; generalization of the reactive model, including exploring a virtual body ownership switching paradigm [OPS15] to enable users to demonstrate expected reactions (demonstrator D1)
- Year 3: evaluation of the generalized reactive model; finalization of the demonstrators (Dfinal)
[CPT21] F. Boucaud, C. Pelachaud, I. Thouvenin. Decision Model for a Virtual Agent that can Touch and be Touched. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '21), 2021.
[But15] M. Buttner. Motion Matching - The Road to Next-Gen Animation. Nucl.ai Conference, 2015.
[OPS15] S. Osimo, R. Pizarro, B. Spanlang, et al. Conversations between self and self as Sigmund Freud—A virtual body ownership paradigm for self counselling. Sci Rep 5, 13899, 2015.
[Sla09] M. Slater. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos Trans R Soc Lond B Biol Sci., 364(1535), 2009.
[SSC10] M. Slater, B. Spanlang, D. Corominas. Simulating virtual environments within virtual environments as the basis for a psychophysics of presence. ACM Trans. Graph. 29(4), 2010. https://doi.org/10.1145/1778765.1778829
[VLM08] J. van den Berg, M. Lin, D. Manocha. Reciprocal velocity obstacles for real-time multi-agent navigation. In IEEE Int. Conf. on Robotics and Automation, 1928–1935, 2008.