See the demo
To access the demo, please click on the image below.

[demo screenshot]

Get the software
At present, the software is not distributed. The goal of this site is to share the research done on the project. The demonstration platform shows the results of this research in a concrete way.

›› Introduction

Adaptation to the execution context has become a major issue with the progress of distributed systems and of computations on grids or computer clusters. A self-adaptive system (or software) continuously evaluates its own behavior. It changes its execution mode or performs a reconfiguration if this self-evaluation indicates that the targeted goal is no longer relevant or that better performance could be achieved. For each task the system must carry out, several modules should be available. Moreover, the system has some knowledge about the properties of these modules, which lets it build the execution chain that is most relevant with respect to the context and launch the appropriate modules with the right parameters. In recent years, research on self-adaptive systems has been very active in several domains: numerical software for dynamic environments (grids, networks), reconfigurable monitoring systems, pervasive computing requiring software and hardware with recomposition or reconfiguration capabilities, web services, etc.
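
As an illustration only, a minimal sketch of such a self-adaptive loop might look as follows: each module declares the task it performs and some properties, and the execution chain is rebuilt whenever self-evaluation reports that the current chain no longer fits the context. All names (Module, build_chain, observe, evaluate) are hypothetical and are not taken from the project.

    from dataclasses import dataclass, field

    @dataclass
    class Module:
        name: str
        task: str                                        # task this module can perform
        properties: dict = field(default_factory=dict)   # e.g. cost, accuracy, noise tolerance

        def suits(self, context: dict) -> bool:
            # The module is eligible when every property it declares agrees with
            # the corresponding value found in the current context (if any).
            return all(context.get(key, value) == value
                       for key, value in self.properties.items())

    def build_chain(tasks, catalogue, context):
        """For each task, pick the first registered module compatible with the context."""
        chain = []
        for task in tasks:
            candidates = [m for m in catalogue if m.task == task and m.suits(context)]
            if not candidates:
                raise RuntimeError(f"no module available for task {task!r}")
            chain.append(candidates[0])
        return chain

    def run_once(tasks, catalogue, chain, evaluate, observe):
        """One iteration of the adaptation loop: reconfigure if needed, then return the chain."""
        context = observe()                                  # current execution context
        if chain is None or not evaluate(chain, context):
            chain = build_chain(tasks, catalogue, context)   # reconfiguration
        return chain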

›› Contribution: piloting the diagnosis

Monitoring systems like Calicot must often cope with low-quality signals. Taking the context into account can often improve the output of the signal processing stage, which is then fed to the diagnosis stage. We are studying how to adapt such monitoring systems to their execution context. For signal processing, for example, the goal is to select and tune the algorithms from an analysis of the current context: type of noise, estimation of the available information, current faults or disorders predicted by the diagnosis.
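
For instance, a context-driven selection rule could, very schematically, look like the sketch below; the context keys ("noise", "snr_db") and the detector names are purely hypothetical and only illustrate the kind of decision that is taken, not the actual Calicot catalogue.

    def select_signal_processing(context: dict):
        """Return an (algorithm name, parameters) pair suited to the current context.

        Hypothetical sketch: both the context keys and the algorithm names are
        illustrative assumptions.
        """
        if context.get("noise") == "baseline_wander":
            # A wavelet-based detector copes better with slow baseline drifts.
            return "wavelet_detector", {"level": 4, "threshold": 0.6}
        if context.get("snr_db", 30.0) < 10.0:
            # Very noisy signal: prefer a robust but coarser detector.
            return "energy_detector", {"window_ms": 150}
        # Clean enough signal: a finer detector that also extracts event attributes.
        return "derivative_detector", {"window_ms": 80, "extract_attributes": True}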

The reasoning stages, such as diagnosis, can also be adapted, for example by taking into account the level of knowledge that the signal processing stages can extract, or the level of knowledge that is sufficient to reach a conclusion. To this end, we have proposed to use a pilot which operates at three levels (a rough code sketch follows the list):

  1. It chooses the algorithms, and their associated parameters, that will be used to process the input signals. The pilot uses specialized rules to choose the most relevant algorithm and to tune it with respect to the current context.
  2. It activates or deactivates processing tasks devoted to the extraction of specific events from input signals.
  3. It adapts the diagnosis to the actual resolution of signal processing.
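
The following sketch summarizes these three levels as a single object; the class and method names are assumptions made for the example and do not correspond to the actual Calicot interfaces.

    class Pilot:
        """Rough sketch of the three-level pilot (illustrative names only)."""

        def __init__(self, rules, processing_tasks, diagnoser):
            self.rules = rules                          # specialized selection/tuning rules
            self.processing_tasks = processing_tasks    # event-extraction tasks, keyed by event type
            self.diagnoser = diagnoser                  # chronicle-recognition engine

        def choose_algorithms(self, context):
            """Level 1: choose algorithms and parameters for input signal processing."""
            return [rule.apply(context) for rule in self.rules if rule.matches(context)]

        def schedule_tasks(self, needed_events):
            """Level 2: activate only the tasks that extract currently needed events."""
            for event_type, task in self.processing_tasks.items():
                task.set_active(event_type in needed_events)

        def adapt_diagnosis(self, available_resolution):
            """Level 3: align the diagnosis with the resolution signal processing can deliver."""
            self.diagnoser.set_abstraction_level(available_resolution)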

In our approach, the diagnosis is achieved by chronicle recognition. The chronicles are organized in a hierarchical chronicle database: the most abstract chronicles contain fewer events and/or fewer attributes; the most specific chronicles are more detailed, i.e., they contain more events, each described by more attributes. The current set of chronicles is chosen with respect to the abstraction level imposed by the chronicle recognition task or by the signal processing task: on the one hand, a more abstract chronicle set needs fewer computing resources than a more refined one; on the other hand, the most abstract events can be computed more accurately on noisy signals. Using such a hierarchy of chronicle sets enables more intelligent resource management and lets the diagnosis focus on the most accurate pieces of information.
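
As a final illustration, the sketch below shows one possible way to encode this trade-off: each chronicle set in the hierarchy carries an estimated recognition cost and the signal quality its events require, and the most refined set satisfying both constraints is chosen. The fields and the fallback rule are assumptions for the example, not values or policies used by Calicot.

    from dataclasses import dataclass

    @dataclass
    class ChronicleSet:
        abstraction_level: int      # 0 = most abstract, larger = more refined
        chronicles: list            # more events and attributes as the level grows
        cpu_cost: float             # estimated cost of recognizing this set
        min_snr_db: float           # signal quality needed to observe its events

    def choose_chronicle_set(hierarchy, cpu_budget, signal_snr_db):
        """Return the most refined chronicle set compatible with resources and signal quality."""
        feasible = [cs for cs in hierarchy
                    if cs.cpu_cost <= cpu_budget and cs.min_snr_db <= signal_snr_db]
        if not feasible:
            # Fall back to the most abstract set: cheapest and most noise-tolerant.
            return min(hierarchy, key=lambda cs: cs.abstraction_level)
        return max(feasible, key=lambda cs: cs.abstraction_level)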