Integrating interactive devices with the user’s body

Seminar
Start date
July 7th
Location
IRISA Rennes
Room
Aurigny (D165)
Speaker
Pedro Lopes

Pedro Lopes (https://lab.plopes.org/), Associate Professor at the University of Chicago, will visit us on July 7th and will give the talk below at 15:30 in room Aurigny (D165). His research focuses on integrating interfaces with the human body, exploring the interface paradigm that supersedes wearables.

Abstract: When we look back to the early days of computing, user and device were distant, often located in separate rooms. Then, in the ’70s, personal computers “moved in” with users. In the ’90s, mobile devices moved computing into users’ pockets. More recently, wearable devices brought computing into constant physical contact with the user’s skin. These transitions proved useful: moving closer to users allowed interactive devices to sense more of their users and become more personal. The main question that drives my research is: what is the next interface paradigm that supersedes wearable devices?
The primary way researchers have been investigating this is by asking where future interactive devices will be located with respect to the user’s body. Many posit that the next generation of interfaces will be implanted inside the user’s body. However, I argue that their location with respect to the user’s body is not the primary factor; in fact, implanted devices already exist, in the form of pacemakers, insulin pumps, and so forth. Instead, I argue that the key factor is how devices will integrate with the user’s biological senses and actuators.
For the past ten years, I have been exploring how this body-device integration allows us to engineer interactive devices that intentionally borrow parts of the body for input and output, rather than adding more technology to the body. For example, one such class of body-integrated devices, which I have advanced, consists of interactive systems based on electrical muscle stimulation. These devices can deliver haptic sensations (e.g., forces in virtual reality) by moving the user’s muscles using computer-controlled electrical impulses.

One key advantage of body-device integration is that it puts forward a new generation of miniaturized devices, allowing us to circumvent traditional physical constraints. For instance, our devices based on electrical muscle stimulation illustrate how to create realistic haptic feedback while circumventing the constraints imposed by robotic exoskeletons, which need to balance their output power against the size of their motors and batteries. Taking this further, we successfully applied this body-device integration approach to other modalities. For instance, we engineered a device that delivers chemicals to the user to create temperature sensations without relying on cumbersome thermal actuators, such as air conditioners or heaters. My approach to miniaturizing devices is especially useful for advancing mobile interactions, such as in virtual or augmented reality, where users want to remain untethered and free.

Second, I found that integrated devices are not only typically smaller; they also enable new physical modes of reasoning with computers, going beyond purely symbolic thinking (reasoning by typing and reading language on a screen). For example, we have engineered a set of devices that control the user’s muscles to provide tacit information, such as a device that controls how a user draws on paper, and a device that allows users to feel and control (without looking) information flowing through their bodies.

Moreover, we found that integrating devices with the user’s body allows us to give users new physical abilities. We have engineered a device that allows users to locate odor sources by “smelling in stereo,” as well as a device that physically accelerates the user’s reaction time using muscle stimulation, allowing users to steer to safety or even catch a falling object that they would normally miss.

While this integration between human and computer is beneficial (e.g., faster reaction time, realistic simulations in VR/AR, or faster skill acquisition), it also requires tackling new challenges, such as improving the precision with which we safely stimulate the body, or the question of agency: do we feel in control when our body is integrated with an interface? Together with our colleagues in neuroscience, we have been measuring how our brain encodes agency to improve the design of this new type of integrated interface. We found that, even in the extreme case of our interfaces that electrically control the user’s muscles, it is possible to improve the sense of agency. More importantly, we found that it is only by preserving the user’s sense of agency that these integrated devices provide benefits even after the user takes them off.

Finally, I believe that these bodily-integrated devices are the natural successor to wearable interfaces and allow us to investigate how interfaces will connect to our bodies in a more direct and personal way.