Working Group 2.2
25 - 29 September 2000, Oldenburg, Germany.
Abstracts of the talks
Davide Sangiorgi (joint work with Francesca Levi)
Controlling Interference in Ambients
The talk is based on joint work with Francesca Levi (University of Pisa),
published in the proceedings of POPL'00.
Two forms of interference are identified in Cardelli and Gordon's
Mobile Ambients (MA): plain interferences, which are similar to the
interferences one finds in CCS and the pi-calculus, and grave
interferences, which are more dangerous and may be regarded as
programming errors. To control interference, the MA movement
primitives are modified. On the new calculus, Safe Ambients (SA),
a type system is defined that controls the mobility of ambients and
removes all grave interferences. Other advantages of SA are a useful
algebraic theory, and programs that are sometimes more robust (they
require milder conditions for correctness) and/or simpler. These
points are illustrated on several examples.
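For comparison, here are the rules for entering an ambient, recalled from the Safe Ambients literature (notation lightly simplified): in MA an ambient n may enter m unilaterally, while in SA the target m must offer a matching co-capability, which is what allows the type system to rule out grave interferences.

```latex
% Mobile Ambients: n enters m without m's consent
n[\,\mathit{in}\;m.\,P \mid Q\,] \;\mid\; m[\,R\,]
  \;\longrightarrow\; m[\,n[\,P \mid Q\,] \mid R\,]

% Safe Ambients: the move also consumes the co-capability \overline{\mathit{in}}\;m offered by m
n[\,\mathit{in}\;m.\,P \mid Q\,] \;\mid\; m[\,\overline{\mathit{in}}\;m.\,R \mid S\,]
  \;\longrightarrow\; m[\,n[\,P \mid Q\,] \mid R \mid S\,]
```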
Wojciech Penczek
Efficient methods for model checking of Timed Petri Nets
Model checking is one of the most popular methods for the automated
verification of concurrent systems, e.g., hardware circuits,
communication protocols, and distributed programs.
However, the practical applicability of this method is strongly restricted
by the state explosion problem, which is mainly caused by representing
the concurrency of operations by their interleaving.
Therefore, many different reduction techniques have been introduced
in order to alleviate state explosion. The major methods include
partial order reductions, BDD-based symbolic storage, and
translations to SAT.
Recently, the interest in automated verification has been moving towards
concurrent real-time systems. The two main models used for representing
such systems are timed automata and Time Petri Nets.
The properties to be verified are expressed either in a standard temporal
logic like LTL or CTL$^*$, or in one of their timed versions like
LTL$_\Delta$, MITL, and TCTL.
The lecture is on the verification of timed and untimed branching-time
temporal properties of Time Petri Nets.
Since Time Petri Nets usually have infinite state spaces,
abstraction techniques are used to represent these by finite ones.
To reduce the sizes of the abstract state spaces,
partial order reductions are exploited.
In this lecture an automata-theoretic approach to model checking is
presented. The theoretical framework developed by Kupferman, Vardi,
and Wolper is used, where it is shown that the problem of branching-time
model checking can be translated to checking the non-emptiness of an
alternating 1-letter word automaton.
The latter problem is known to be solvable in linear time, which allows
verification to be performed efficiently.
The main novelty of the presented work relies on:
- improving the variant of the geometric region method
for defining abstract state spaces preserving properties of CTL$^*$
and ACTL$^*$ such that the structure of the verified formula is
taken into account,
- showing how to extend the po-reduction
methods to deal with branching properties of Time Petri Nets.
The outline of the lecture is as follows:
- Introduction to model checking,
- Reduction methods for untimed global state approaches:
on-the-fly, po-reductions, symbolic (BDDs), translation to SAT,
- Timed and untimed temporal logics: LTL, CTL$^*$, ACTL$^*$,
LTL$_\Delta$, MITL, and TCTL,
- Translations from temporal logics to automata,
- Time Petri Nets,
- Abstractions preserving timed and untimed temporal logics,
- Partial order reductions for Time Petri Nets.
The lecture is based on the following papers of the author:
- W. Penczek:
Partial order reductions for checking branching properties
of Timed Petri Nets, Proc. of CS\&P, Berlin, to appear, 2000.
- W. Penczek, M. Szreter, R. Gerth and R. Kuiper:
Improving Partial Order Reductions for Universal Branching
Timed Properties, Fundamenta Informaticae, to appear, 2000.
- M. Szreter and W. Penczek:
More than one, less than all: Linear to Branching revisited,
Proc. of CS\&P, Berlin, to appear, 2000.
- R. Gerth, R. Kuiper, D. Peled, and W. Penczek:
A Partial Order Approach to Branching Time Logic Model Checking,
Information and Computation, Vol. 150, No. 2, pp. 132-152, 1999.
Deciding low levels of the mu-calculus alternation hierarchy
Consider the hierarchy obtained by alternating least and greatest
fixpoints in mu-calculus terms. Alternatively, this hierarchy can be
characterised by the sizes of winning conditions in automata on trees.
The problem is: given a regular tree language, establish the least
number of fixpoint alternations needed to write a term defining the
language. I will show that it is EXPTIME-complete to decide whether
the language can be written using least fixpoints only. The same bound
also holds for terms with only greatest fixpoints. The case of mixed
(but not alternating) fixpoints is already much more difficult to
solve. The subject is also a good occasion to give a very short
summary of what is known about the alternation hierarchy.
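Standard examples (not specific to this talk) illustrate the lowest levels of the hierarchy: "eventually p" needs only a least fixpoint, "p forever along some path" only a greatest one, while "p infinitely often along some path" needs a genuine nu/mu alternation:

```latex
\mu X.\,(p \vee \Diamond X)
  \quad\text{(on some path, eventually $p$)}
\nu X.\,(p \wedge \Diamond X)
  \quad\text{(on some path, $p$ holds forever)}
\nu X.\,\mu Y.\,\bigl((p \wedge \Diamond X) \vee \Diamond Y\bigr)
  \quad\text{(on some path, $p$ holds infinitely often)}
```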
Sticks and Stones:
Making the Most of a Single Unbounded Dimension
Marta Z. Kwiatkowska
Verifying Soft Deadlines with Probabilistic Timed Automata
Gheorghe Stefanescu
Using space-time duality to structure interaction systems
Space-time duality may be used to transform simple
imperative programs into (pure) object-oriented programs. By
this mechanism the usual memory states describing the (spatial)
interfaces of imperative programs are transformed into
temporal interfaces of interacting systems. As a
side-effect, this suggests a way to pass from the current
stream-based temporal interfaces to abstract, highly
structured temporal interfaces. At the same time, the
well-known techniques used for structured programs in
imperative programming may be used to obtain structured
interaction systems. Finally, and clearly a very important
point, the classical program verification techniques for
imperative programs may be used to get similar tools for
interaction systems.
The aim of the talk is to present the space-time duality
mechanism and to look at a few aspects of interaction
systems which have to be reconsidered in view of
space-time duality. Also, a few technical ingredients, most
of them based on the network algebra approach, will be
presented.
For more on this, one may have a look at the book:
G. Stefanescu, Network Algebra, Springer-Verlag, 2000.
Hans Langmaack (work with M. Mueller-Olm and A. Wolf)
On preservation of relative correctness
M. Mueller-Olm and A. Wolf have introduced the notions of ``relative
correctness of programs'' and ``preservation of relative correctness
of program implementations and of compilations''. These notions
generalize the notions of total resp. partial correctness and their
preservation. The latter notions do not sufficiently satisfy the
requirements of practical software engineering and program
compilation. Realistic software implementation and compiler
construction have to master both types of errors in the same context:
acceptable, excusable, unavoidable errors on the one hand, and
unacceptable, inexcusable, avoidable ones on the other.
It is well known that total program correctness is more difficult to
prove than partial correctness. But it is most interesting to see that
for preservation of total resp. partial correctness the situation is
the other way round. This phenomenon was first observed when
compiling while-programs into flat assembly code with unconditional
and conditional jumps [MOl95] and later when translating
ALGOL-like programs with nested blocks, parameterless procedures and
static scoping into flat assembly code augmented by subroutine and
return jumps [MOW00,Wol00]. Source languages are assumed to be
furnished with denotational (predicate transformer) semantics, target
languages with operational step semantics.
The decisive difference is evoked by the premise whether divergence
(infinite computation) is considered to be an unacceptable error
(generalized total correctness) or an acceptable one
(generalized partial correctness). The inductive proof of generalized
total correctness preservation utilizes just a series of algebraic
laws for the semantics of assembly instructions. But beyond that, the
proof of preservation of generalized partial correctness has to exploit
a fixpoint characterization of the operational target semantics. In case
divergence is unacceptable, the operational target semantics is indeed
the least fixpoint of a certain functional; otherwise it is the largest
fixpoint.
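In classical predicate-transformer terms (the textbook analogue, not quoted from the cited papers), the two loop semantics differ in exactly this way: the weakest precondition of a while-loop, which treats divergence as failure, is a least fixpoint, whereas its weakest liberal precondition, which excuses divergence, is a greatest fixpoint:

```latex
wp(\mathbf{while}\ b\ \mathbf{do}\ S,\ Q)
  \;=\; \mu W.\,\bigl((b \wedge wp(S, W)) \vee (\neg b \wedge Q)\bigr)
wlp(\mathbf{while}\ b\ \mathbf{do}\ S,\ Q)
  \;=\; \nu W.\,\bigl((b \Rightarrow wlp(S, W)) \wedge (\neg b \Rightarrow Q)\bigr)
```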
Easiness of proof might be an indication why software engineering in
the past has favoured preservation of total correctness for software
implementation. But in usual information processing outside process
programming, preservation of (generalized) partial correctness is at
least as important, especially because realistic target machines with
their finite resources cannot fulfill all total program correctness
requirements.
Most realistic compilers do a lot of code optimizations like dead or
redundant code elimination, code motion, or unswitching. But often
language and compiler manuals do not inform the users adequately
about the correctness of optimizations, namely what conditions source
programs must fulfill and what consequences for computed results are
to be expected. Relative program correctness and its preservation
allow very precise investigations and delineations of these conditions
and consequences [MOW99,MOW00,Wol00].
- [MOl95] M. Mueller-Olm. An Exercise in Compiler
Verification. Kiel, 1995. Available from the author.
- [MOl97] M. Mueller-Olm. Modular Compiler Verification: A
Refinement-Algebraic Approach Advocating Stepwise Abstraction. LNCS,
Vol. 1283, Springer-Verlag, 1997.
- [MOW99] M. Mueller-Olm and A. Wolf. On Excusable and Inexcusable
Failures: On an Adequate Notion of Translation Correctness. In J. Wing,
J. Woodcock and J. Davies (eds.), FM '99 - Formal Methods. LNCS, Vol. 1709,
1107-1127, Springer-Verlag, 1999.
- [MOW00] M. Mueller-Olm and A. Wolf. On the Translation of
Procedures to Finite Machines: Abstraction Allows a Clean Proof. In
G. Smolka (ed.), Programming Languages and Systems, Proc. ESOP
2000. LNCS, Vol. 1782, 290-304, Springer-Verlag, 2000.
- [Wol00] A. Wolf. Weakest Relative Precondition Semantics:
Balancing Approved Theory and Realistic Translation Verification.
Dissertation, Univ. Kiel, submitted 2000.
Compositional Model Checking
Juri Vain (joint work with O. Owe, E.B. Johnsen, E. Munthe-Kaas)
Fault-tolerant design in the OUN specification language
Probabilistic asynchronous $\pi$-calculus
We propose an extension of the asynchronous
$\pi$-calculus with a notion of random choice.
We define an operational semantics which distinguishes
between probabilistic choice, made internally by the process,
and nondeterministic choice, made externally by an adversary
scheduler. This distinction allows us to reason about the
probabilistic correctness of algorithms under certain schedulers.
We show that in this language we can solve the electoral problem,
which was proved to be impossible in the asynchronous $\pi$-calculus.
Finally, we show an implementation of the probabilistic
asynchronous $\pi$-calculus in a Java-like language.
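To illustrate why probabilistic choice helps with the electoral problem, here is a toy Python simulation (not the authors' calculus nor their Java-like implementation): two symmetric processes repeatedly flip fair coins; the first round in which the outcomes differ elects a leader. Deterministic symmetric processes can never break this tie, but the randomized protocol terminates with probability 1, each round succeeding with probability 1/2.

```python
import random

def elect_leader(rng: random.Random) -> int:
    """Symmetric randomized leader election between processes 0 and 1.

    Each round both processes flip a fair coin; if the outcomes differ,
    the process that drew 1 becomes the leader. Terminates with
    probability 1 (the expected number of rounds is 2)."""
    while True:
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        if a != b:
            return 0 if a == 1 else 1

rng = random.Random(42)  # fixed seed: the adversary cannot see the coins
winners = [elect_leader(rng) for _ in range(1000)]
# Over many runs, each process is elected roughly half of the time.
print(sorted(set(winners)))  # [0, 1]
```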
Philippe Darondeau (joint work with B. Caillaud, L. Helouet, and G. Lesventes)
Realizing HMSC's by Distributable PN's
High-level Message Sequence Charts (HMSC's) may be considered as
specifications for distributed systems. Realizations thereof may be
sought in distributable Petri nets, a subclass of Petri nets where
transitions are endowed with locations and conflicts between
transitions are always local. Distributable nets actually translate to
equivalent (up to branching bisimulation) families of automata
communicating by asynchronous messages. Unfortunately, one cannot
decide whether a given HMSC may be realized exactly (i.e., up to
equality of languages) by some distributable Petri net. We therefore
advocate taking HMSC's as incomplete specifications. In this case,
each HMSC has an optimal realization by a distributable Petri net,
with the least language including the language of the HMSC. This
realization may be computed and it may also be model-checked. We raise
the question whether the relations between specifications and
realizations of reactive systems should not be reconsidered in this
way.
$\alpha$: A purely functional language with imperative features
We present a purely functional language $\alpha$ that naturally contains an
imperative language as its sublanguage. $\alpha$ is pure in the sense
that (1) it is confluent and (2) it is a conservative extension of the
pure $\lambda$-calculus. The purity of $\alpha$ makes it a mathematically
and logically well-founded language. For example, it becomes possible to
prove properties of programs written in $\alpha$ by equational reasoning.
The main characteristic of $\alpha$ is that it has environments as
first-class objects. By using first-class environments, we can encapsulate
imperative programs by treating them as environment
transformers. In this way, side-effects caused by the execution of
imperative programs become invisible from outside.
We also discuss other usages of first-class environments. For example,
we show that, given an abstraction obtained by abstracting several variables,
we can instantiate any of these variables by specifying the name of
the variable to be instantiated and the value which will replace it.
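A minimal sketch of the environment-transformer idea, in Python rather than $\alpha$ (all names here are illustrative, not the language's actual syntax): an environment is reified as a mapping, and each imperative statement becomes a pure function from environments to environments, so sequencing is function composition and no visible mutation occurs.

```python
from typing import Callable, Mapping

Env = Mapping[str, int]
Transformer = Callable[[Env], Env]

def assign(name: str, expr: Callable[[Env], int]) -> Transformer:
    """'name := expr' as a pure environment transformer:
    returns a fresh environment instead of mutating the old one."""
    return lambda env: {**env, name: expr(env)}

def seq(*stmts: Transformer) -> Transformer:
    """Sequential composition of imperative statements."""
    def run(env: Env) -> Env:
        for stmt in stmts:
            env = stmt(env)
        return env
    return run

# x := 1; y := x + 2  -- an imperative fragment, evaluated purely
prog = seq(assign("x", lambda e: 1),
           assign("y", lambda e: e["x"] + 2))
print(prog({}))  # {'x': 1, 'y': 3}
```

Because `prog` is an ordinary function, properties such as `prog({}) == prog(prog({}))` here can be checked by equational reasoning, in the spirit of the talk.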