While interacting with our environment, humans are driven by an endless cycle: the perception-action loop. This loop runs continuously all day long: the brain receives and processes external stimuli, determines which actions it wants or needs to perform, and executes them; these actions in turn generate additional stimuli, closing the loop. The perception-action loop models a complex process bounded by the perceptual, cognitive, and motor skills of each of us. By iterating through this loop over and over, and thanks to the plasticity of our brains, humans are able to learn and adapt throughout our entire lives. Indeed, this process allows a human to learn not only how to walk, but also how to use any tool, from a physical hammer to a virtual flying interface while immersed in a virtual environment.
In real life, this loop is unmediated: we directly perceive the real world and act on it. Yet, when immersed in virtual or augmented reality, we perceive and act indirectly. The virtual world is perceived through a number of output devices (e.g. screens, headphones, haptic devices), and we act through a number of input devices (e.g. tracking systems, buttons, joysticks). The perception-action loop in virtual and augmented reality can be decomposed as follows: (1) the user receives multi-sensory feedback from the virtual environment (perception), (2) decides and plans the action he or she wants to perform (cognition), (3) executes the planned actions (action), and (4) the system interprets and executes the user's actions (commands). The execution of the commands then generates additional feedback (5), closing the loop. The user interface, commonly referred to as a 3D user interface or 3DUI, becomes the tool that enables the user to interact with and perceive the virtual environment: it translates the user's actions into commands and generates feedback that the user can perceive. However, issues in any of these processes will hinder user interaction, having a non-negligible impact on the overall usability of the system. Using the perception-action loop as a canvas, this presentation will cover a selection of my research activity since I started my PhD in 2006, aimed at enhancing this loop.
My main goal, as stated in the title of this manuscript, is the design of adaptive 3D user interfaces: interfaces that are aware of the perception and interaction capabilities of their users, and that are able to efficiently support the user while performing 3D interaction tasks. Compared to real-life interactions, which are bounded by the laws of physics, interactions in virtual environments are bounded only by our imagination.
(The presentation will be in English.)
- Michel Beaudouin-Lafon, Professor, Université Paris-Saclay
- Doug Bowman, Professor, Virginia Tech
- Anthony Steed, Professor, University College London
- Daniel Mestre, Research Director, CNRS
- François Chaumette, Research Director, Inria