Understanding Expressive Movement Knowledge: Developing a Taxonomy of Movement Features

Team and supervisors
Department / Team: 
Team website: 
http://www-expression.irisa.fr/
Thesis director
Sylvie GIBET
Co-director(s), co-supervisor(s)
Caroline Larboulette
Contact(s)
Name: Sylvie Gibet
Email: sylvie.gibet@univ-ubs.fr
Phone: 0763236977
Thesis subject
Description

With the continual proliferation of new devices and techniques for motion capture, there is an essential need for the formalization of high-level semantic features describing human movement. The large variety of available sensors (motion capture, embedded or physiological sensors) provides various perspectives on movement information, but it also comes at the cost of a large disparity of data representations that makes movement analysis and interaction design difficult. Unlike the field of audio signal processing, which benefits from a unified representation of the acoustic data, movement information is highly multidimensional: it comes in a variety of physical quantities (position, acceleration, muscular activation, etc.) with different meanings and specifications (scaling, sampling rate, etc.). As a result, researchers and professionals often need to “cook” computational features for a particular system and usage, and adapting these features to other technologies and use cases can be tedious.
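
To make this disparity concrete, the following Python sketch wraps heterogeneous streams in a minimal common container and computes one simple expressive descriptor (quantity of motion, taken here as mean speed) independently of the capturing device. This is an illustrative assumption, not an existing API or part of the thesis: all names (MovementStream, resample, quantity_of_motion) are hypothetical.

    # Minimal sketch of a sensor-agnostic movement stream (hypothetical names).
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MovementStream:
        data: np.ndarray    # shape (n_samples, n_dims), e.g. 3D joint positions
        rate: float         # sampling rate in Hz
        quantity: str       # physical quantity: "position", "acceleration", ...
        units: str          # e.g. "m", "m/s^2"

    def resample(stream: MovementStream, target_rate: float) -> MovementStream:
        """Linearly resample a stream to a common rate (naive sketch)."""
        t_old = np.arange(stream.data.shape[0]) / stream.rate
        t_new = np.arange(0.0, t_old[-1], 1.0 / target_rate)
        data = np.stack([np.interp(t_new, t_old, stream.data[:, d])
                         for d in range(stream.data.shape[1])], axis=1)
        return MovementStream(data, target_rate, stream.quantity, stream.units)

    def quantity_of_motion(stream: MovementStream) -> float:
        """Mean speed of a positional stream: one simple expressive descriptor."""
        assert stream.quantity == "position"
        velocity = np.diff(stream.data, axis=0) * stream.rate
        return float(np.linalg.norm(velocity, axis=1).mean())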

 

This thesis proposes to advance the formalization of higher-level feature extraction relating to the semantic, expressive, and affective qualities of human movement. By limiting the dependence on the sensor level, this contribution can provide unprecedented possibilities for the design of gestural analysis/synthesis and full-body interactions. The thesis will explore the data and their existing representations, and will work toward a homogeneous formalization of features that takes into account the different semantic and expressive levels underlying the movements. The thesis work will consist of developing models for representing, storing, and accessing these heterogeneous and varied data (captured from high-definition sensors or low-cost devices, covering all or part of the human body). A few movement databases are currently available around the world, but they offer different classes of data (everyday movements, artistic movements such as dance or theater, sports movements, locomotion, etc.) that can be captured on one or several subjects, and include varying degrees of expressive variation. Besides, the sets of features characterizing these data are numerous and varied, and they depend both on the performed movements and on the targeted objectives (tracking, segmentation and annotation, recognition, synthesis, etc.).
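
As a rough sketch of what such a representation and storage model could look like, the hypothetical record below carries the metadata discussed above (movement class, subject, expressive variation, body coverage) alongside the raw streams and annotations; all field names are assumptions made purely for illustration.

    # Illustrative database record for heterogeneous captures (assumed schema).
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class MotionRecording:
        subject_id: str
        movement_class: str        # "everyday", "dance", "theater", "sports", ...
        expressive_variation: str  # e.g. "neutral", "happy", "tired"
        body_coverage: str         # "full_body" or "partial"
        streams: Dict[str, object] = field(default_factory=dict)  # e.g. MovementStream objects
        annotations: List[Tuple[float, float, str]] = field(default_factory=list)  # (t_start, t_end, label)

    def by_class(db: List[MotionRecording], movement_class: str) -> List[MotionRecording]:
        """Naive access path: all recordings of a given movement class."""
        return [r for r in db if r.movement_class == movement_class]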

 

The thesis addresses multiple objectives:

 

  • a review and analysis of existing computational features, in relation to their application domains and tasks;
  • a formalization of motion processing techniques and the definition of feature sets that are relevant to particular types of movement;
  • the definition of metrics to characterize the selected movements and features;
  • the implementation of methods to evaluate the features quantitatively; we will focus in particular on classification methods that rely on the automatic selection of a subset of features (see the sketch after this list);
  • the development of a methodology for evaluating computational features for expressive movement modeling;
  • the establishment of a set of benchmarks combining several captured motion databases and sets of features, depending on the targeted task (motion retrieval, expressive motion synthesis, recognition).
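
For the feature-selection and quantitative-evaluation objectives above, a minimal sketch using scikit-learn could look as follows. The feature matrix X (one row per motion clip, one column per computational feature) and the expressive class labels y are random placeholders standing in for real data.

    # Sketch: automatic selection of a feature subset, then quantitative
    # evaluation of the retained subset by cross-validated classification.
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 20))    # placeholder: 120 clips, 20 features
    y = rng.integers(0, 4, size=120)  # placeholder: 4 expressive classes

    selector = RFE(SVC(kernel="linear"), n_features_to_select=5)
    selector.fit(X, y)
    subset = np.flatnonzero(selector.support_)  # indices of retained features

    scores = cross_val_score(SVC(kernel="linear"), X[:, subset], y, cv=5)
    print("selected features:", subset, "- mean CV accuracy:", scores.mean())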

 

Two main types of applications will be considered in this thesis: (i) the recognition of expressive gestures exploiting the most significant subsets of features; in the context of interactive applications, real-time recognition approaches will be developed; and (ii) the synthesis and simulation of virtual systems controlled by expressive gestural parameters derived from these same features. We will verify that these simulated systems, which can be 3D models (anthropomorphic or not) or sound synthesis models, actually reflect human expressive activity. To this end, it will be necessary to develop and adapt mapping models between interaction data and simulation data using machine learning algorithms. Finally, the evaluation of the synthesis models will also validate the choice of gestural control parameters.
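
As a rough illustration of such a mapping model, the sketch below learns a regularized linear regression from expressive gesture features to the control parameters of a simulated system. The data here are random placeholders; in practice the pairs would come from captured gestures aligned with the corresponding simulation or sound-synthesis parameters, and nonlinear models could be substituted.

    # Sketch: learned mapping from gesture features to simulation parameters.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X_feat = rng.normal(size=(200, 12))    # expressive features per clip/frame
    Y_params = rng.normal(size=(200, 4))   # e.g. 4 synthesis control parameters

    mapping = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    mapping.fit(X_feat, Y_params)          # learn features -> control parameters

    new_gesture = rng.normal(size=(1, 12))
    print("predicted control parameters:", mapping.predict(new_gesture))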


The questions identified in this thesis constitute an original research problem in their own right, which can lead to academic publications (conference and journal papers). In addition, the set of tools and methods developed will be a valuable resource for all research teams working on human movement (benchmarking, sharing of data and evaluation models, etc.). Moreover, the project will draw upon the complementary work and expertise of the two research teams involved (IRISA and the School of Interactive Arts and Technology) to review and formalize the current knowledge in movement processing within the movement and computing community.

Bibliography
  1. Michelle Karg, Ali-Akbar Samadani, Rob Gorbet, Kolja Kühnlenz, Jesse Hoey, Dana Kulic: Body Movements for Affective Expression: A Survey of Automatic Recognition and Generation. IEEE Trans. Affective Computing 4(4): 341-359 (2013)
  2. Pamela Carreno-Medrano, Sylvie Gibet, Pierre-François Marteau: End-effectors trajectories: An efficient low-dimensional characterization of affective-expressive body motions. ACII 2015: 435-441
  3. Caroline Larboulette, Sylvie Gibet: A review of computable expressive descriptors of human motion. MOCO 2015: 21-28
  4. Caroline Larboulette, Sylvie Gibet: I Am a Tree: Embodiment Using Physically Based Animation Driven by Expressive Descriptors of Motion. MOCO 2016, Thessaloniki, Greece, July 2016.

 

Start date: 
October 1st, 2017
Keywords: 
Expressive features, Movement knowledge, Movement language, Analysis, Synthesis
Location: 
IRISA - Campus de Tohannic, Université Bretagne Sud, Vannes