# BibTeX References

S. Ferré, O. Ridoux
"The Use of Associative Concepts in the Incremental Building of a Logical Context", In Int. Conf. Conceptual Structures, LNCS 2393, pp. 299-313, U. Priss, D. Corbett, G. Angelova (Eds), Springer, 2002.

Keywords: concept analysis, logic, information systems, learning, classification, context.
@InProceedings{FerRid2002,
author       = {S. Ferré and O. Ridoux},
title        = {The Use of Associative Concepts in the Incremental Building of a Logical Context},
booktitle    = {Int. Conf. Conceptual Structures},
pages        = "299--313",
year         = {2002},
editor       = "U. Priss and D. Corbett and G. Angelova",
series       = "LNCS 2393",
publisher    = "Springer",
language     = "English",
url          = "ftp://ftp.irisa.fr/local/lande/sf-or-iccs02.ps.gz",
keywords     = {concept analysis, logic, information systems, learning, classification, context},
abstract     = "A formal context associates to objects a description that combines
automatically extracted properties (intrinsic) and manually assigned
ones (extrinsic). The extrinsic properties are expressed by users
according to intentions that are often subjective and changing, and
determine the classification and retrieval of objects. So, we find it
important to assist users in this task through the automatic
suggestion of extrinsic properties to be assigned and even the
discovery of rules to automate these assignments. The principle is to
learn from the description of existing objects the extrinsic
description of a new object.  Because of the changing nature of users'
intentions, the assistance given in the incremental building of a
logical context must be interactive. We present formal principles, and
an application to the classification of email messages. "
}


Thomas Genet, Valérie Viet Triem Tong
"Proving Negative Conjectures on Equational Theories using Induction and Abstract Interpretation", INRIA, technical report, nº RR-4576, 2002.

Keywords: Equational theories, proof by induction, abstract interpretation, rewriting.
@TECHREPORT{GenetVTTong-RR02,
AUTHOR       = "Genet, Thomas and Viet Triem Tong, Val\'{e}rie",
TITLE        = "{P}roving {N}egative {C}onjectures on {E}quational {T}heories using {I}nduction and {A}bstract {I}nterpretation",
INSTITUTION  = {INRIA},
NUMBER       = {RR-4576},
YEAR         = 2002,
URL          = {ftp://ftp.irisa.fr/local/lande/tg-vvtt-inria02.ps.gz},
KEYWORDS     = {Equational theories, proof by induction, abstract
interpretation, rewriting},
ABSTRACT     = {In this paper we present a method to automatically prove
initial negative properties on equational specifications. This method
combines induction and abstract interpretation. Induction is performed
in a classical way using cover sets and rewriting. Abstract interpretation is done
using an additional set of equations used to approximate the initial
model into an abstract one. As in the methods
dedicated to the proof by induction of positive properties, the
equational specification is supposed to be oriented into a
terminating, confluent and complete term rewriting system.}
}


Sébastien Ferré
"Complete and Incomplete Knowledge in Logical Information Systems", In Symbolic and Quantitative Approaches to Reasoning with Uncertainty, LNCS 2143, pp. 782-791, Salem Benferhat, Philippe Besnard (Eds), Springer, 2001.

Keywords: modal logic, All I know, complete and incomplete knowledge, information system.
@InProceedings{Fer2001,
author       = {Sébastien Ferré},
title        = {Complete and Incomplete Knowledge in Logical Information Systems},
booktitle    = {Symbolic and Quantitative Approaches to Reasoning with Uncertainty},
pages        = "782--791",
year         = "2001",
editor       = "Salem Benferhat and Philippe Besnard",
series       = "LNCS 2143",
publisher    = "Springer",
keywords     = {modal logic, All I know, complete and incomplete knowledge, information system},
url          = "ftp://ftp.irisa.fr/local/lande/sf-ecsqaru2001.ps.gz",
language     = "English",
abstract     = "We present a generalization of logic All I Know by presenting it as an extension of standard modal logics.
We study how this logic can be used to represent complete and incomplete knowledge in Logical Information Systems. In these information systems, a knowledge base is a collection of objects (e.g., files, bibliographical items) described in the same logic as used for expressing queries.
We show that usual All I Know (transitive and euclidean accessibility relation) is convenient for representing complete knowledge, but not for incomplete knowledge. For this, we use \emph{serial} All I Know (serial accessibility relation).",
refperso     = "200109D"
}


S. Ferré, O. Ridoux
"A Framework for Developing Embeddable Customized Logics", In Int. Work. Logic-based Program Synthesis and Transformation, LNCS 2372, pp. 191-215, A. Pettorossi (Eds), Springer, 2001.

Keywords: logic, composition, theorem prover, logic-based systems.
@InProceedings{FerRid2001b,
author       = {S. Ferré and O. Ridoux},
title        = {A Framework for Developing Embeddable Customized Logics},
booktitle    = {Int. Work. Logic-based Program Synthesis and Transformation},
year         = {2001},
editor       = {A. Pettorossi},
series       = {LNCS 2372},
publisher    = {Springer},
pages        = {191--215},
language     = {English},
url          = {ftp://ftp.irisa.fr/local/lande/sf-or-lopstr01.ps.gz},
keywords     = {logic, composition, theorem prover, logic-based systems},
abstract     = {Logic-based applications often use customized logics which are
composed of several logics. These customized logics are also often
embedded as a black-box in an application. Their implementation
requires the specification of a well-defined interface with common
operations such as a parser, a printer, and a theorem prover. In order
to be able to compose these logics, one must also define composition
laws, and prove their properties. We present the principles of logic
functors and their compositions for constructing customized logics. An
important issue is how the operations of different sublogics
inter-operate.  We propose a formalization of the logic functors,
their semantics, implementations, and their composition.}
}


Sébastien Ferré, Olivier Ridoux
"Searching for Objects and Properties with Logical Concept Analysis", In International Conference on Conceptual Structures, LNCS 2120, pp. 187-201, Harry S. Delugach, Gerd Stumme (Eds), Springer, 2001.

Keywords: concept analysis, navigation, knowledge discovery, logical information system.
@InProceedings{FerRid2001,
author       = {Sébastien Ferré and Olivier Ridoux},
title        = {Searching for Objects and Properties with Logical Concept Analysis},
booktitle    = {International Conference on Conceptual Structures},
pages        = "187--201",
year         = {2001},
editor       = "Harry S. Delugach and Gerd Stumme",
series       = "LNCS 2120",
publisher    = "Springer",
url          = "ftp://ftp.irisa.fr/local/lande/sf-or-iccs2001.ps.gz",
keywords     = {concept analysis, navigation, knowledge discovery, logical information system},
language     = "English",
abstract     = "Logical Concept Analysis is Formal Concept Analysis where logical
formulas replace sets of attributes.  We define a Logical Information
System that combines navigation and querying for searching for
objects.  Places and queries are unified as formal concepts
represented by logical formulas.  Answers can be both extensional
(objects belonging to a concept) and intensional (formulas refining a
concept).  Thus, all facets of navigation are formalized in terms of
Logical Concept Analysis.  We show that the definition of being a
refinement of some concept is a specific case of Knowledge Discovery
in a formal context.  It can be generalized to recover more classical
KD~operations like machine-learning through the computation of
necessary or sufficient properties (modulo some confidence), or
data-mining through association rules.",
refperso     = "200109C"
}


T. Genet, Valérie Viet Triem Tong
"Reachability Analysis of Term Rewriting Systems with Timbuk", In 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning, Lecture Notes in Artificial Intelligence, vol. 2250, pp. 691-702, Springer Verlag, 2001.

Keywords: Timbuk, Term Rewriting, Reachability, Tree Automata, Descendants, Program Verification.
@INPROCEEDINGS{GenetVTTong-LPAR01,
AUTHOR       = {Genet, T. and Viet Triem Tong, Val\'{e}rie},
TITLE        = {{R}eachability {A}nalysis of {T}erm {R}ewriting {S}ystems with {\sl Timbuk}},
BOOKTITLE    = {8th International Conference on Logic for
Programming, Artificial Intelligence, and Reasoning},
PUBLISHER    = {Springer Verlag},
SERIES       = {Lecture Notes in Artificial Intelligence},
YEAR         = {2001},
PAGES        = {691--702},
VOLUME       = 2250,
ABSTRACT     = {We present Timbuk -- a tree automata library -- which implements
usual operations on tree automata as well as a completion
algorithm used to compute an over-approximation of the set of
descendants R*(E) for a regular set E and a term rewriting
system R, possibly non linear and non terminating. On several
examples of term rewriting systems representing programs and systems to
verify, we show how to use Timbuk to construct their approximations
and then prove unreachability properties of these systems.},
KEYWORDS     = {Timbuk, Term Rewriting, Reachability, Tree Automata, Descendants, Program Verification},
URL          = {ftp://ftp.irisa.fr/local/lande/tg-vvtt-lpar01.ps.gz}
}


Thomas Genet, Valérie Viet Triem Tong
"Reachability Analysis of Term Rewriting Systems with Timbuk (extended version)", INRIA, technical report, nº RR-4266, 2001, Technical Report.
@TECHREPORT{GenetVTTong-RR01,
AUTHOR       = "Genet, Thomas and Viet Triem Tong, Val\'{e}rie",
TITLE        = "{R}eachability {A}nalysis of {T}erm {R}ewriting {S}ystems with {\sl Timbuk} (extended version)",
INSTITUTION  = {INRIA},
TYPE         = {Technical Report},
NUMBER       = {RR-4266},
YEAR         = 2001,
URL          = {ftp://ftp.irisa.fr/local/lande/tg-vvtt-inria01.ps.gz},
ABSTRACT     = {We present Timbuk -- a tree automata library -- which implements
usual operations on tree automata as well as a completion
algorithm used to compute an over-approximation of the set of
descendants R*(E) for a regular set E and a term rewriting
system R, possibly non linear and non terminating. On several
examples of term rewriting systems representing programs and systems to
verify, we show how to use Timbuk to construct their approximations
and then prove unreachability properties of these systems.}
}


Thomas Colcombet, Pascal Fradet
"Enforcing Trace Properties by Program Transformation", In Conference record of the 27th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2000.
@InProceedings{POPL00:Colcombet-Fradet,
title        = "Enforcing Trace Properties by Program Transformation",
author       = "Thomas Colcombet and Pascal Fradet",
url          = "http://www.irisa.fr/EXTERNE/projet/lande/colcombe/Publi/popl00-cf.ps.gz",
booktitle    = "Conference record of the 27th {ACM}
{SIGPLAN-SIGACT} Symposium on Principles of Programming
Languages",
abstract     = {We propose an automatic method
to enforce trace properties on programs.
The programmer specifies the property
separately from the program;
a program transformer takes the program and the property
and automatically produces another ``equivalent''
program satisfying the property.
This separation of concerns makes the program
easier to develop and maintain.
Our approach is both static and dynamic.
It integrates static analyses in order to avoid
useless transformations. On the other hand, it
never rejects programs but adds dynamic checks
when necessary.
An important challenge is to make this
dynamic enforcement as inexpensive as possible.
The most obvious application domain
is the enforcement of security policies.
In particular, a potential use of the method is
securing mobile code upon receipt.},
month        = {January},
year         = 2000
}


M. Ducassé, J. Noyé
"Tracing Prolog programs by source instrumentation is efficient enough", Journal of Logic Programming, 2000, To appear.

Keywords: Debugging, tracing, source to source transformation, benchmarking, Prolog.
@Article{ducasse2000,
Author       = {M. Ducass\'e and J. Noy\'e},
Title        = {Tracing {Prolog} programs by source instrumentation is
efficient enough},
Journal      = {Journal of Logic Programming},
Year         = {2000},
Note         = {To appear},
url          = {http://www.irisa.fr/lande/ducasse/ducasse-noye-manchester-98.ps.gz},
keywords     = {Debugging, tracing, source to source transformation, benchmarking, Prolog},
abstract     = {Tracing by automatic program source
instrumentation has major advantages over compiled code
instrumentation: it is more portable, it benefits from many compiler
optimizations, it produces traces in terms of the original program,
and it can be tailored to specific debugging needs. The usual argument
in favor of compiled code instrumentation is its supposed
efficiency. We have compared the performance of two operational
low-level Prolog tracers with source instrumentation. We have executed
classical Prolog benchmark programs, collecting trace information
without displaying it. On average, collecting trace information by
program instrumentation is about as fast as using a low-level tracer
in one case, and only twice as slow in the other. This is a minor
penalty to pay, compared to the advantages of the approach. To our
knowledge, this is the first time that a quantitative comparison of
both approaches is made for any programming language. }
}


Sébastien Ferré, Olivier Ridoux
"A Logical Generalization of Formal Concept Analysis", In International Conference on Conceptual Structures, Lecture Notes in Computer Science, nº 1867, pp. 371-384, Guy Mineau, Bernhard Ganter (Eds), Springer, August 2000.

Keywords: concept analysis, logic, context, information system.
@InProceedings{FerRid2000b,
author       = "Sébastien Ferré and Olivier Ridoux",
title        = "A Logical Generalization of Formal Concept Analysis",
booktitle    = "International Conference on Conceptual Structures",
pages        = "371--384",
year         = "2000",
editor       = "Guy Mineau and Bernhard Ganter",
number       = "1867",
series       = "Lecture Notes in Computer Science",
month        = {August},
publisher    = "Springer",
url          = "ftp://ftp.irisa.fr/local/lande/sf-or-iccs00.ps.gz",
keywords     = {concept analysis, logic, context, information system},
language     = "English",
abstract     = "We propose a generalization of Formal Concept Analysis (FCA) in which sets of attributes are replaced by expressions of an almost arbitrary logic. We prove that all FCA can be reconstructed on this basis. We show that from any logic that is used in place of sets of attributes can be derived a contextualized logic that takes into account the formal context and that is isomorphic to the concept lattice. We then justify the generalization of FCA compared with existing extensions and in the perspective of its application to information systems."
}


Sébastien Ferré, Olivier Ridoux
"A File System Based on Concept Analysis", In International Conference on Rules and Objects in Databases, Lecture Notes in Computer Science, nº 1861, pp. 1033-1047, Yehoshua Sagiv (Eds), Springer, July 2000.

Keywords: concept analysis, logic, information system, file system.
@InProceedings{FerRid2000a,
author       = "Sébastien Ferré and Olivier Ridoux",
title        = "A File System Based on Concept Analysis",
booktitle    = "International Conference on Rules and Objects in Databases",
pages        = "1033--1047",
year         = "2000",
editor       = "Yehoshua Sagiv",
number       = "1861",
series       = "Lecture Notes in Computer Science",
month        = {July},
publisher    = "Springer",
url          = "ftp://ftp.irisa.fr/local/lande/sf-or-dood00.ps.gz",
keywords     = {concept analysis, logic, information system, file system},
language     = "English",
abstract     = {We present the design of a file system whose organization is based on Concept Analysis ``à la Wille-Ganter''.
The aim is to combine querying and navigation facilities in one formalism.
The file system is supposed to offer a standard interface but the interpretation of common notions like directories is new.
The contents of a file system are interpreted as a Formal Context,
directories as Formal Concepts,
and the sub-directory relation as Formal Concepts inclusion.
We present an organization that allows for an efficient implementation of such
a Conceptual File System.}
}


P. Fradet, V. Issarny, S. Rouvrais
"Analyzing non-functional properties of mobile agents", In Proc. of Fundamental Approaches to Software Engineering, FASE'00, Lecture Notes in Computer Science, Springer-Verlag, March 2000.

Keywords: Mobile agents, RPC, performance, security.
@InProceedings{FASE00,
author       = {P. Fradet and V. Issarny and S. Rouvrais},
title        = {Analyzing non-functional properties of mobile agents},
booktitle    = {Proc. of Fundamental Approaches to Software Engineering, FASE'00},
year         = {2000},
PUBLISHER    = {Springer-Verlag},
SERIES       = {Lecture Notes in Computer Science},
keywords     = {Mobile agents, RPC, performance, security},
month        = {March},
abstract     = {The mobile agent technology is emerging as a useful new way of
building large distributed systems. The reasons for choosing mobile agents
are mainly non-functional.
We believe that one of the reasons preventing the wide-spread use of mobile agents
is that non-functional properties are not easily grasped by system designers.
Selecting the right interactions to implement complex services is therefore a
difficult task. In this paper, we tackle this problem by considering efficiency and security criteria.
We propose a language-based framework for the specification and implementation of complex services
built from interactions with primitive services.
Mobile agents, {\sc Rpc},
remote evaluation, or any combination of these forms of interaction can be expressed in this framework.
We show how to analyze (i.e. assess and compare) complex service
implementations with respect to efficiency and security properties.
This analysis provides guidelines to service designers,
enabling them to systematically select and combine different types of protocols
for the effective realization of interactions with primitive services.}
}


P. Fradet, J. Mallet
"Compilation of a Specialized Functional Language for Massively Parallel Computers", INRIA, technical report, nº 3894, March 2000.

Keywords: Skeletons, polytopes, data parallelism, cost analysis, program transformation.
@TechReport{RR3894,
author       = {P. Fradet and J. Mallet},
title        = {Compilation of a Specialized Functional Language for Massively Parallel Computers},
institution  = {INRIA},
year         = 2000,
keywords     = {Skeletons, polytopes, data parallelism, cost analysis, program transformation},
abstract     = {We propose a parallel specialized language that ensures portable and
cost-predictable implementations on parallel computers.
The language is basically a first-order, recursion-less, strict
functional language
equipped  with a collection of higher-order functions or skeletons.
These skeletons apply on (nested) vectors
and can be grouped in four classes:
computation, reorganization, communication, and mask skeletons.
The compilation process is described as a series of transformations
to \SPMD-like functional programs which can be directly translated
into real parallel code.
The language restrictions enforce a programming discipline
whose benefit is to allow a static, symbolic, and accurate
cost analysis.
The parallel cost takes into account both load balancing
and communications, and can be statically
evaluated even when the actual size of vectors
or the number of processors are unknown.
It is used to automatically select
the best data distribution among
a set of standard distributions.
Interestingly, this work can be seen as a cross-fertilization
between techniques developed within the FORTRAN parallelization,
skeleton, and functional programming communities.},
number       = 3894,
month        = {March}
}


T. Genet, F. Klay
"Rewriting for Cryptographic Protocol Verification", In Proceedings 17th International Conference on Automated Deduction, Lecture Notes in Artificial Intelligence, vol. 1831, Springer-Verlag, 2000.
@INPROCEEDINGS{GenetKlay-CADE00,
AUTHOR       = {Genet, T. and Klay, F.},
TITLE        = {{R}ewriting for {C}ryptographic {P}rotocol {V}erification},
BOOKTITLE    = {Proceedings 17th International Conference on Automated
Deduction},
SERIES       = {Lecture Notes in Artificial Intelligence},
PUBLISHER    = {Springer-Verlag},
YEAR         = {2000},
VOLUME       = 1831,
abstract     = "On a case study, we present a new approach  for verifying  cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set.  Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved  sessions, and a theoretical property ensuring safeness of the approximation."
}


T. Genet, F. Klay
"Rewriting for Cryptographic Protocol Verification (extended version)", INRIA, technical report, nº 3921, 2000, Technical Report.
@TECHREPORT{GenetK-CNET99,
AUTHOR       = {Genet, T. and Klay, F.},
TITLE        = {{R}ewriting for {C}ryptographic {P}rotocol {V}erification (extended version)},
INSTITUTION  = {INRIA},
YEAR         = {2000},
TYPE         = {Technical Report},
NUMBER       = {3921},
url          = "ftp://ftp.irisa.fr/local/lande/tg-fk-inria00.ps.gz",
abstract     = "On a case study, we present a new approach  for verifying  cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set.  Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved  sessions, and a theoretical property ensuring safeness of the approximation."
}


F. Besson, T. Jensen, J. P. Talpin
"Polyhedral Analysis for Synchronous Languages", In International Static Analysis Symposium, SAS'99, Lecture Notes in Computer Science, vol. 1694, Gilberto Filé (Eds), Springer-Verlag, September 1999.

Keywords: synchronous languages, abstract interpretation, reachability analysis, infinite state systems.
@INPROCEEDINGS{SAS99:BJT,
AUTHOR       = "F. Besson and T. Jensen and J.P. Talpin",
TITLE        = "{P}olyhedral {A}nalysis for {S}ynchronous {L}anguages",
EDITOR       = {Gilberto Fil\'e},
PUBLISHER    = {Springer-Verlag},
YEAR         = {1999},
SERIES       = {Lecture Notes in Computer Science},
VOLUME       = {1694},
BOOKTITLE    = {International Static Analysis Symposium, SAS'99},
MONTH        = {September},
keywords     = {synchronous languages, abstract interpretation,
reachability analysis, infinite state systems},
abstract     = {We define an operational semantics for the Signal language
and design an analysis which makes it possible to verify properties
pertaining to the relation between values of the numeric and boolean
variables of a reactive system.  A distinguished feature of the
analysis is that it is expressed and proved correct with respect to the source
program rather than
on an intermediate representation of the program. The analysis
calculates a safe approximation to the
set of reachable states by a symbolic fixed point computation in the
domain of convex polyhedra using a novel widening operator based on
the convex hull representation of polyhedra. }
}


M. Ducassé
"Abstract views of Prolog executions with Opium", In Learning to Build and Comprehend Complex Information Structures: Prolog as a Case Study, Cognitive Science and Technology, P. Brna, B. du Boulay, H. Pain (Eds), Ablex, 1999.

Keywords: Software engineering, Programming environment, Automated debugging, Trace abstraction mechanisms, Debugging language, Program behavior understanding, Prolog Debugging tool.
@InCollection{ducasse99b,
Author       = {M. Ducass\'{e}},
url          = {http://www.irisa.fr/lande/ducasse/ducasse-ablex99.ps.gz},
Title        = {Abstract views of {Prolog} executions with {Opium}},
BookTitle    = {Learning to Build and Comprehend Complex Information
Structures: Prolog as a Case Study},
Publisher    = {Ablex},
Series       = {Cognitive Science and Technology},
Year         = {1999},
Editor       = {P. Brna and B. du Boulay and H. Pain},
keywords     = {Software engineering, Programming environment,
Automated debugging, Trace abstraction mechanisms,
Debugging language, Program behavior understanding,
Prolog Debugging tool},
abstract     = {Opium is a system for analysing and debugging Prolog
programs. Its kernel comprises an execution tracer
and a programming language with a set of primitives
for trace and source analysis. In this chapter we
show the power of Opium for supporting abstract
views of Prolog executions. Abstract views give
high-level points of view about executions. They
filter out irrelevant details; they restructure the
remaining information; and they compact it so that
the amount of information given at each step has a
reasonable size. The examples of abstract views
given in the following are a goal execution profile,
some data abstractions, an instantiation profile, a
failure analysis, a loop analysis, and a kind of
explanation for an expert system written in
Prolog. }
}


M. Ducassé
"Opium: An extendable trace analyser for Prolog", The Journal of Logic programming, 1999, Special issue on Synthesis, Transformation and Analysis of Logic Programs, A. Bossi and Y. Deville (eds), Also Rapport de recherche INRIA RR-3257 and Publication Interne IRISA PI-1127.

Keywords: Software Engineering, Automated Debugging, Trace Query Language, Program Execution Analysis, Abstract Views of Program Executions, Prolog.
@Article{ducasse99,
Author       = {M. Ducass\'{e}},
url          = {http://www.irisa.fr/lande/ducasse/ducasse-jlp-98.ps.gz},
Title        = {Opium: An extendable trace analyser for {Prolog}},
Journal      = {The Journal of Logic programming},
Year         = {1999},
Note         = {Special issue on Synthesis, Transformation and
Analysis of Logic Programs, A. Bossi and Y. Deville
(eds), Also Rapport de recherche INRIA RR-3257 and
Publication Interne IRISA PI-1127},
keywords     = {Software Engineering, Automated Debugging, Trace
Query Language, Program Execution Analysis, Abstract
Views of Program Executions, Prolog},
abstract     = {Traces of program executions are a helpful source of
information for program debugging. They, however,
give a picture of program executions at such a low
level that users often have difficulties to
interpret the information. Opium, our extendable
trace analyzer, is connected to a ``standard''
Prolog tracer. Opium is programmable and
extendable. It provides a trace query language and
abstract views of executions. Users can therefore
examine program executions at the levels of
abstraction which suit them. Opium has shown its
capabilities to build abstract tracers and automated
debugging facilities. This article describes in
depth the trace query mechanism, from the model to
its implementation. Characteristic examples are
detailed. Extensions written so far on top of the
trace query mechanism are listed. Two recent
extensions are presented: the abstract tracers for
the LO (Linear Objects) and the CHR (Constraint
Handling Rules) languages. These two extensions were
specified and implemented within a few days. They
show how to use Opium for real applications. }
}


M. Ducassé
"Coca: An automated Debugger for C", In Proceedings of the 21st International Conference on Software Engineering, pp. 504-513, ACM Press, May 1999.

Keywords: Software engineering, Programming environment, Automated debugging, Trace query mechanism, Debugging language, Program behavior understanding, C Debugging tool.
@InProceedings{ducasse99c,
Author       = {M. Ducass\'{e}},
Title        = {Coca: An automated Debugger for {C}},
Pages        = {504-513},
BookTitle    = {Proceedings of the 21st International Conference on Software Engineering},
Year         = {1999},
Publisher    = {ACM Press},
Month        = {May},
Annote       = {Also RR-3489},
url          = {http://www.irisa.fr/lande/ducasse/coca-irisa-98.ps.gz},
keywords     = {Software engineering, Programming environment,
Automated debugging, Trace query mechanism, Debugging language,
Program behavior understanding, C Debugging tool},
abstract     = {We present Coca, an automated debugger for C,
where the breakpoint mechanism is based on events related to language
constructs. Events have semantics whereas source lines used by most
debuggers do not have any. A trace is a sequence of events. It can be
seen as an ordered relation in a database. Users can specify precisely
which events they want to see by specifying values for event
attributes. At each event, visible variables can be queried. The trace
query language is Prolog with a handful of primitives. The trace query
mechanism searches through the execution traces using both control
flow and data whereas debuggers usually search according to either
control flow or data. As opposed to fully relational'' debuggers
which use plain database querying mechanisms, Coca trace querying
mechanism does not require any storage. The analysis is done on the
fly, synchronously with the traced execution. Coca is therefore more
powerful than ``source line'' debuggers and more efficient than
relational debuggers. }
}


M. Ducassé
"An introduction to the B formal method", In Proceedings of the 9th International Workshop on LOgic-based Program Synthesis and TRansformation, pp. 23-30, A.-L. Bossi (Eds), Universita' Ca' Foscari di Venezia, September 1999, Technical report CS-99-16, Slides.
@InProceedings{ducasse99d,
Author       = {M. Ducass\'e},
Title        = {An introduction to the B formal method},
Pages        = {23--30},
BookTitle    = {Proceedings of the 9th International Workshop on
LOgic-based Program Synthesis and TRansformation},
Year         = {1999},
Editor       = {A.-L. Bossi},
Publisher    = {Universita' Ca' Foscari di Venezia},
Month        = {September},
Note         = {Technical report CS-99-16, Slides}
}


S. Ferré, O. Ridoux
"Une généralisation logique de l'analyse de concepts logique", Inria, Institut National de Recherche en Informatique et en Automatique, technical report, nº RR-3820, December 1999, Technical Report.

Keywords: concept analysis, concept lattice, logic, context, information systems, querying, browsing.
@TechReport{FerRid1999,
pages        = "26 p.",
type         = "Technical Report",
number       = "RR-3820",
institution  = "Inria, Institut National de Recherche en Informatique
et en Automatique",
title        = "Une généralisation logique de l'analyse de concepts
logique",
author       = "S. Ferré and O. Ridoux",
year         = "1999",
month        = {December},
url          = "ftp://ftp.irisa.fr/local/lande/sf-or-rr3820.ps.gz",
keywords     = {concept analysis, concept lattice, logic, context,
information systems, querying, browsing},
abstract     = {We propose a generalisation of formal concept analysis
(FCA) in which sets of attributes are replaced by expressions of an
almost arbitrary logic. We prove that all of FCA can be reconstructed
on this basis. We show that, from any logic used in place of sets of
attributes, one can derive a contextualized logic that takes the
formal context into account and that is isomorphic to the concept
lattice. We then compare this generalisation of FCA with the
extensions that have already been proposed. Finally, we present our
perspectives for applying it to information systems.},
annote       = {FCA,logic,SI}
}


P. Fradet, D. Le Métayer, M. Périn
"Consistency checking for multiple view software architectures", In Proceedings of the joint 7th European Software Engineering Conference (ESEC) and 7th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-7), LNCS, Springer-Verlag, Toulouse, France, September 1999.

Keywords: software architecture, multiple views, UML diagrams, formal semantics, families of graphs, static consistency verification.
@InProceedings{Fradet:FSE99:ConsistencyChecking,
author       = {Fradet, P. and Le~Métayer, D. and Périn, M.},
title        = {Consistency checking for multiple view software
architectures},
booktitle    = {Proceedings of the joint 7th European Software
Engineering Conference (ESEC) and 7th ACM SIGSOFT
International Symposium on the Foundations of
Software Engineering (FSE-7)},
year         = {1999},
series       = {LNCS},
publisher    = {Springer-Verlag},
month        = {September},
keywords     = {software architecture, multiple views, UML diagrams,
formal semantics, families of graphs,
static consistency verification},
abstract     = {Consistency is a major issue that must be properly
addressed in multiple view architectures. In this paper, we provide a formal
definition of views expressed graphically using
diagrams with multiplicities and propose a simple
algorithm to check the consistency of diagrams. We
also put forward a simple language of constraints to
express more precise (intra-view and inter-view)
consistency requirements. We sketch a complete
decision procedure to decide whether diagrams
satisfy a given constraint expressed in this
language. Our framework is illustrated with excerpts
of a case study: the specification of the
architecture of a train control system.},
url          = {http://www.irisa.fr/EXTERNE/projet/lande/LeMetayer.html,
http://www.irisa.fr/EXTERNE/projet/lande/mperin/mperin.html}
}


P. Fradet, M. Südholt
"An aspect language for robust programming", In Workshop on Aspect-Oriented Programming, ECOOP 1999, July 1999.

Keywords: aspect-oriented programming, robustness, exceptions, program transformation, program analysis.
@InProceedings{AOP99,
author       = "P. Fradet and M. {S\"udholt}",
title        = "An aspect language for robust programming",
booktitle    = "{\it Workshop on Aspect-Oriented Programming, ECOOP 1999}",
keywords     = {aspect-oriented programming, robustness, exceptions, program transformation, program analysis},
MONTH        = {July},
year         = "1999",
abstract     = {In this position paper, we advocate the use of an aspect language
for robust programming.
AOP is particularly appealing for this task because robustness
crosscuts traditional structuring means. Consider, for instance,
the problem of ensuring that a global index remains in a given range.
The code needed to check such an invariant is
typically scattered all over the program.
The paper presents an
example-driven introduction of the proposed aspect language for program
robustness; it then discusses its semantics and
implementation and suggests extensions.}
}


V. Gouranton, D. Le Métayer
"Dynamic slicing: a generic analysis based on a natural semantics format", Journal of Logic and Computation, vol. 9, nº 6, pp. 835-871, December 1999.
@ARTICLE{Gouranton:99:DynamicSlicing,
AUTHOR       = {V. Gouranton and D. {Le M\'etayer}},
JOURNAL      = {Journal of Logic and Computation},
MONTH        = {December},
NUMBER       = {6},
PAGES        = {835-871},
TITLE        = {Dynamic slicing: a generic analysis based on a natural
semantics format},
VOLUME       = {9},
YEAR         = {1999},
ABSTRACT     = {Slicing analyses have been proposed for
different programming languages. Rather than
defining a new analysis from scratch for each
programming language, we would like to specify
such an analysis once for all, in a
language-independent way, and then specialise
it for different programming languages. In
order to achieve this goal, we propose a notion
of natural semantics format and a dynamic
slicing analysis format. The natural semantics
format formalises a class of natural semantics
and the analysis format is a generic,
language-independent, slicing analysis. The
correctness of the generic analysis is
established as a relation between the
derivation trees of the original program and
the slice. This generic analysis is then
instantiated to several programming languages
conforming to the semantics format (an imperative
language, a logic programming language and a
functional language), yielding a dynamic
slicing analyser for each of these languages.}
}


E. Jahier, M. Ducassé
"Opium-M 0.1 User and Reference Manuals", IRISA, Rennes, March 1999.

Keywords: Logic programming, Mercury, Trace analyser, Trace query language, Automated debugging, User manual, Reference manual.
@Manual{jahier99,
title        = "Opium-M 0.1 User and Reference Manuals",
author       = "E. Jahier and M. Ducassé",
year         = "1999",
month        = "March",
keywords     = "Logic programming, Mercury, Trace analyser,
Trace query language, Automated debugging, User manual,
Reference manual",
abstract     = "This document gathers the user manual and the reference
manual of Opium-M, an analyser of execution traces of Mercury
programs. Opium-M is an adaptation to Mercury of Opium, a
trace analyser for Prolog.
Mercury is a new logic programming language. Its type, mode
and determinism declarations enable code to be generated
that is at once more efficient and more reliable
than with current logic programming languages. The
deterministic parts of Mercury programs are as efficient as
their C counterparts. Moreover, numerous mistakes are
detected at compilation time. However, our industrial partner's
experience shows that the fewer the remaining mistakes, the
harder they are to diagnose. A high-level debugging tool
was thus necessary.
Program execution traces given by traditional debuggers
provide programmers with useful pieces of
information. However, using them requires analysing
huge amounts of information by hand.
Opium-M is connected to the traditional tracer of Mercury; it
allows execution trace analyses to be automated. It provides
a relational trace query language based on Prolog which
enables users to specify precisely what they want to see in
the trace. Opium-M then automatically filters out
information irrelevant to the users."
}


E. Jahier, M. Ducassé
"A generic approach to monitor program executions", In Proceedings of the International Conference on Logic Programming, D. De Schreye (Eds), MIT Press, November 1999.

Keywords: Monitoring, Trace analysis, Flexibility, Logic programming, Mercury.
@InProceedings{jahier99d,
Author       = {E. Jahier and M. Ducass\'{e}},
Title        = {A generic approach to monitor program executions},
Pages        = {},
BookTitle    = {Proceedings of the International Conference on Logic Programming},
Year         = {1999},
Editor       = {D. De Schreye},
Publisher    = {MIT Press},
Month        = {November},
Location     = {mireille: whole proceedings},
url          = {ftp://ftp.irisa.fr/local/lande/ej-md-iclp99.ps.gz},
keywords     = {Monitoring, Trace analysis, Flexibility, Logic programming,
Mercury},
abstract     = {
Monitoring requires gathering data about executions. The monitoring
functionalities currently available are built on top of ad hoc
instrumentations. Most of them are implemented at a low level; in any
case they require an in-depth knowledge of the system to
instrument. The best people to implement these instrumentations are
generally the implementors of the compiler. They, however, cannot
decide which data to gather. Indeed, hundreds of variants can be useful
and only end-users know what they want.

In this article, we propose a primitive which enables users to easily
specify what to monitor. It is built on top of the tracer of the
Mercury compiler.  We illustrate how to use this primitive on two
different kinds of monitoring. Firstly, we implement monitors that
collect various kinds of statistics; each of them is well known, the
novelty is that users can get exactly the variants they need. Secondly,
we define two notions of test coverage for logic programs and show how
to measure coverage rates with our primitive. To our knowledge, no
definition of test coverage existed for logic programming so far.  Each
example is only a few lines of Mercury. Measurements show that the
performance of the primitive on the above examples is acceptable for an
execution of several million trace events. Our primitive, although
simple, lays the foundation for a generic and powerful monitoring
environment.}
}


E. Jahier, M. Ducassé
"Un traceur d'exécutions de programmes ne sert pas qu'au débogage", In Actes des Journées francophones de Programmation Logique et par Contraintes, F. Fages (Eds), Hermès, Lyon, June 1999.

Keywords: Dynamic analysis, Trace analysis, Monitoring, Measure of test coverage, Logic programming, Mercury.
@InProceedings{jahier99b,
author       = {E. Jahier and M. Ducass\'e},
title        = {Un traceur d'ex\'{e}cutions de programmes ne sert pas qu'au d\'{e}bogage},
booktitle    = {Actes des Journ\'{e}es francophones de Programmation Logique
et par Contraintes},
editor       = {F.~Fages},
year         = 1999,
month        = {June},
publisher    = {Herm\`{e}s},
url          = {ftp://ftp.irisa.fr/local/lande/ej-md-jfplc99.ps.gz},
keywords     = {Dynamic analysis, Trace analysis, Monitoring,
Measure of test coverage, Logic programming, Mercury},
abstract     = {In this paper we show that a tracer with a trace analyser
can be used to achieve more than debugging. We first illustrate how to
compute coverage ratios for test cases. We also give four examples of
other dynamic analyses. Instead of developing ad hoc
instrumentations, which is currently the case for such tools, one can
use a uniform environment which allows a synergy between the tools to
take place. As a matter of fact, while studying the test coverage
measurement we enriched the trace information, to the benefit of the
other tools.  Moreover, ad hoc instrumentations require an in-depth
knowledge of the system to instrument, either at low level or by
source-to-source transformation. Even if it is not technically difficult, it
always requires a significant programming effort. By contrast, with
our approach, the instrumentation is generic. It is done once and for all,
and the specific analyses can be relatively simple. The examples in
this article consist of fewer than a dozen Prolog lines each.}
}


E. Jahier, M. Ducassé, O. Ridoux
"Specifying trace models with a continuation semantics", In Proc. of ICLP'99 Workshop on Logic Programming Environments, M. Ducassé, A. Kusalik, L. Naish, G. Puebla (Eds), 1999, LPE'99.

Keywords: trace models, continuation semantics, specification, validation, Logic programming.
@InProceedings{jahier99e,
Author       = {E. Jahier and M. Ducass\'e and O. Ridoux},
Title        = {Specifying trace models with a continuation semantics},
booktitle    = {Proc. of ICLP'99 Workshop on Logic Programming Environments},
editor       = {M. Ducass\'{e} and A. Kusalik and L. Naish and G. Puebla},
year         = {1999},
note         = {LPE'99},
url          = {ftp://ftp.irisa.fr/local/lande/ej-md-or-lpe99.ps.gz},
keywords     = {trace models, continuation semantics, specification, validation,
Logic programming},
abstract     = {In this article, we give a formal specification of Byrd's
box model and we show how this specification can be extended to specify
richer trace models. We also show how these specifications can be
executed by a direct translation into lambda-Prolog, leading to an
executable specification that can be used both to experiment with
various trace models and to validate the
different event specifications. Hence we have a formal framework to
specify and prototype trace models.}
}


D. Le Métayer, V.-A. Nicolas, O. Ridoux
"Verification by testing for recursive program schemes", In LOPSTR'99 (International Workshop on Logic Program Synthesis and Transformation), Springer-Verlag, LNCS, 1999, To appear.

Keywords: Software engineering, program verification, white-box testing, automated test data generation, program analysis, program schemes.
@InProceedings{dlm-van-or-lopstr99,
author       = {D. Le Métayer and V.-A. Nicolas and O. Ridoux},
title        = {Verification by testing for recursive program schemes},
booktitle    = {LOPSTR'99 (International Workshop on Logic Program Synthesis and
Transformation)},
year         = {1999},
publisher    = {Springer-Verlag, LNCS},
note         = {To appear},
url          = {http://www.irisa.fr/lande/vnicolas/articleLNCS99.pdf},
keywords     = {Software engineering, program verification, white-box testing,
automated test data generation, program analysis, program schemes},
abstract     = {In this paper, we explore the testing-verification relationship with
the objective of mechanizing the generation of test data. We
consider program classes defined as recursive program schemes and we
show that complete and finite test data sets can be associated with
such classes, that is to say that these test data sets allow us to
distinguish any two different functions in these schemes. This
technique is applied to the verification of simple properties of
programs.}
}


S. Mallet, M. Ducassé
"Generating deductive database explanations", In Proceedings of the International Conference on Logic Programming, D. De Schreye (Eds), MIT Press, November 1999.

Keywords: deductive databases, debugging, trace, operational semantics, multi-SLD-AL, meta-interpreter, substitution set, instrumentation.
@InProceedings{sm-iclp99,
Author       = {S. Mallet and M. Ducass\'{e}},
Title        = {Generating deductive database explanations},
Pages        = {},
BookTitle    = {Proceedings of the International Conference on Logic
Programming},
Year         = 1999,
Editor       = {D. De Schreye},
Publisher    = {MIT Press},
Month        = {November},
Location     = {mireille: whole proceedings},
url          = {ftp://ftp.irisa.fr/local/lande/sm-md-iclp99.ps.gz},
keywords     = {deductive databases, debugging, trace, operational
semantics, multi-SLD-AL, meta-interpreter,
substitution set, instrumentation},
abstract     = {Existing explanation systems for deductive databases
show forests of proof trees. Although proof trees
are often useful, they are only one possible
interesting representation. We argue that an
explanation system for deductive databases must be
able to generate explanations at several levels of
abstraction. One possible and well known technique
to achieve this flexibility is to instrument
meta-interpreters. It is, however, not often used
because of its inefficiency. On the other hand,
deductive databases often generate intermediate
information stored in the physical database. This
information can be considered as a low-level trace
giving a faithful picture of what has happened at
the relational level. The deductive reasoning is
lost but can be very easily recovered. We propose a
technique to generate explanations by integrating a
relational trace and an instrumented
meta-interpreter. The expensive aspects of
meta-interpretation are reduced by the use of the
trace which avoids many costly calculations. The
flexibility of meta-interpretation is preserved, as
illustrated by the generation of three different
kinds of explanations: a box-oriented trace, a
multi-SLD-AL tree and abstract AND trees. This
technique enables powerful explanation systems to be
implemented with very few modifications of the
deductive database mechanism itself.}
}


S. Mallet, M. Ducassé
"Myrtle: A set-oriented meta-interpreter driven by a "relational" trace for deductive database debugging", In LOgic-based Program Synthesis and TRansformation, pp. 328-330, P. Flener (Eds), Springer-Verlag, LNCS 1559, 1999, Abstract; extended version in Research Report 3598.
@InProceedings{sm-lopstr98,
Author       = {S. Mallet and M. Ducass\'{e}},
Title        = {Myrtle: A set-oriented meta-interpreter driven by a
``relational'' trace for deductive database
debugging},
Pages        = {328-330},
Editor       = {P. Flener},
Booktitle    = {{LO}gic-based Program Synthesis and TRansformation},
Year         = {1999},
Publisher    = {Springer-Verlag, LNCS 1559},
Note         = {Abstract- Extended version in Research Report 3598}
}


S. Mallet, M. Ducassé
"Myrtle: A set-oriented meta-interpreter driven by a "relational" trace for deductive database debugging", INRIA, technical report, nº RR-3598, January 1999, Research Report.

Keywords: deductive databases, debugging, trace, multi-SLD-AL, meta-interpreter, substitution set, instrumentation.
@TechReport{sm-rr99,
Author       = {S. Mallet and M. Ducass\'{e}},
Title        = {Myrtle: A set-oriented meta-interpreter driven by a
``relational'' trace for deductive database
debugging},
Institution  = {INRIA},
Year         = 1999,
Type         = {Research Report},
Number       = {RR-3598},
Month        = {January},
url          = {http://www.inria.fr/RRRT/RR-3598.html},
keywords     = {deductive databases, debugging, trace, multi-SLD-AL,
meta-interpreter, substitution set, instrumentation},
abstract     = {Deductive databases manage large quantities of data
and, in general, in a set-oriented way. The existing
systems of explanation for deductive databases do
not take these constraints into account. We propose
a tracing technique which consists of integrating a
"relational" trace and an instrumented
meta-interpreter using substitution sets. The
relational trace efficiently gives precise
information about data extraction from the
relational database. The meta-interpreter manages
substitution sets and gives explanations about the
deduction. The expensive aspects of
meta-interpretation are reduced by the use of the
trace, which avoids many calculations. The
flexibility of meta-interpretation is preserved. It
allows different trace profiles to be easily
produced.}
}


R. Douence, P. Fradet
"A systematic study of functional language implementations", ACM Transactions on Programming Languages and Systems, vol. 20, nº 2, pp. 344-387, 1998.
@Article{TOPLAS98,
author       = {R. Douence and P. Fradet},
url          = {http://www.acm.org/pubs/articles/journals/toplas/1998-20-2/p344-douence/p344-douence.pdf},
title        = {A systematic study of functional language implementations},
journal      = {{ACM} Transactions on Programming Languages and Systems},
year         = {1998},
volume       = {20},
number       = {2},
pages        = {344--387},
abstract     = {We introduce a unified framework to describe, relate, compare and classify functional
language implementations. The compilation process is expressed as a succession of program
transformations in the common framework. At each step, different transformations model fundamental
choices. A benefit of this approach is to structure and decompose the implementation process. The
correctness proofs can be tackled independently for each step and amount to proving program
transformations in the functional world. This approach also paves the way to formal comparisons by
making it possible to estimate the complexity of individual transformations or compositions of them.
Our study aims at covering the whole known design space of sequential functional language
implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction
strategies as well as environment and graph-based implementations. We describe for each compilation
step the diverse alternatives as program transformations. In some cases, we illustrate how to compare
or relate compilation techniques, express global optimizations or hybrid implementations. We also
provide a classification of well-known abstract machines.}
}


M. Ducassé
"Coca: A Debugger for C Based on Fine Grained Control Flow and Data Events", INRIA, technical report, nº IRISA PI 1202 or INRIA RR-3489, September 1998.

Keywords: Software engineering, Programming environment, Automated debugging, Trace query mechanism, Debugging language, Program behavior understanding, C Debugging tool.
@TechReport{ducasse98,
Author       = {M. Ducass\'{e}},
url          = {http://www.irisa.fr/lande/ducasse/coca-irisa-98.ps.gz},
Title        = {Coca: A Debugger for {C} Based on Fine Grained
Control Flow and Data Events},
Year         = {1998},
Institution  = {INRIA},
Number       = {IRISA PI 1202 or INRIA RR-3489},
Month        = {September},
keywords     = {Software engineering, Programming environment,
Automated debugging, Trace query mechanism,
Debugging language, Program behavior understanding,
C Debugging tool},
abstract     = {We present Coca, an automated debugger for C, where
the breakpoint mechanism is based on events related
to language constructs. Events have semantics
whereas source lines used by most debuggers do not
have any. A trace is a sequence of events. It can be
seen as an ordered relation in a database. Users can
specify precisely which events they want to see by
specifying values for event attributes. At each
event, visible variables can be queried. The trace
query language is Prolog with a handful of
primitives. The trace query mechanism searches
through the execution traces using both control flow
and data whereas debuggers usually search according
to either control flow or data. As opposed to ``fully
relational'' debuggers which use plain database
querying mechanisms, Coca's trace querying mechanism
does not require any storage. The analysis is done
on the fly, synchronously with the traced
execution. Coca is therefore more powerful than
``source line'' debuggers and more efficient than
relational debuggers.}
}


M. Ducassé
"Teaching B at a Technical University is Possible and Rewarding", In B'98, Proceedings of the Educational Session, H. Habrias, S. E. Dunn (Eds), Association de Pilotage des Conférences B, Nantes, April 1998, ISBN: 2-9512461-0-2.

Keywords: B formal method, teaching.
@InProceedings{ducasse98c,
Author       = {M. Ducass\'e},
url          = {http://www.irisa.fr/lande/ducasse/B-edu-98.ps.gz},
Title        = {Teaching B at a Technical University
is Possible and Rewarding},
BookTitle    = {B'98, Proceedings of the Educational Session},
Year         = {1998},
Editor       = {H. Habrias and S. E. Dunn},
Publisher    = {Association de Pilotage des Conférences B, Nantes},
Month        = {April},
Note         = {ISBN: 2-9512461-0-2},
keywords     = {B formal method, teaching},
abstract     = {In January 1994, to replace a highly unpopular
denotational semantics course, I undertook to set up a course on the B
method at the INSA of Rennes (Institut National des Sciences
Appliquées), at a Bac+4 level.  I had almost no previous knowledge of
formal methods. I had, however, programmed much in Prolog and felt the
need for a strong programming discipline, supported if possible by
methods and tools.  The experience is, in my opinion, successful. The
students do learn much during the course, find interesting placements
where their competence is appreciated and every occurrence of the
course teaches me something.  In the article, I first list reasons to
start the experience. I then discuss the pedagogical objectives of the
course. The contents of the course are given and an assessment is made.}
}


M. Ducassé, J. Noyé
"Tracing Prolog Programs by Source Instrumentation is Efficient Enough", In IJCSLP'98 Post-conference workshop on Implementation Technologies for Programming Languages based on Logic, K. Sagonas (Eds), June 1998.

Keywords: Debugging, tracing, source to source transformation, benchmarking, Prolog.
@InProceedings{ducasse98b,
Author       = {M. Ducass\'e and J. Noy\'e},
url          = {http://www.irisa.fr/lande/ducasse/ducasse-noye-manchester-98.ps.gz},
Title        = {Tracing {Prolog} Programs by Source Instrumentation is
Efficient Enough},
BookTitle    = {IJCSLP'98 Post-conference workshop on
Implementation Technologies for Programming Languages based on
Logic.},
Year         = {1998},
Editor       = {K. Sagonas},
Month        = {June},
keywords     = {Debugging, tracing, source to source transformation,
benchmarking, Prolog},
abstract     = {Tracing by automatic program source instrumentation
has major advantages over compiled code instrumentation: it is more
portable, it benefits from many compiler optimizations, it produces
traces in terms of the original program, and it can be tailored to
specific debugging needs.  The usual argument in favor of compiled
code instrumentation is its supposed efficiency.  We have compared the
performance of two operational low-level Prolog tracers with source
instrumentation. We have executed classical Prolog benchmark programs,
collecting trace information without displaying it. On average,
collecting trace information by program instrumentation is about as
fast as using a low-level tracer in one case, and only twice as slow in
the other. This is a minor penalty to pay, compared to the advantages
of the approach.  To our knowledge, this is the first time that a
quantitative comparison of both approaches is made for any programming
language.  }
}


P. Fradet, D. Le Métayer
"Structured Gamma", Science of Computer Programming, vol. 31, nº 2-3, pp. 263-289, 1998.
@Article{SCP98,
author       = {P. Fradet and D. Le Métayer},
title        = {Structured Gamma},
journal      = {Science of Computer Programming},
year         = {1998},
volume       = {31},
number       = {2-3},
pages        = {263--289},
abstract     = {The Gamma language is based on the chemical reaction metaphor which
has a number of benefits with respect to parallelism and program derivation.
But the original definition of Gamma does not provide any facility
for data structuring or for specifying
particular control strategies.
We address this issue by introducing a notion of
{\em structured multiset} which is a set of addresses
satisfying specific relations.
The relations can be seen as a form  of
neighborhood between the molecules of the solution; they can be used
in the reaction condition of a program or transformed by the action.
A type is defined by a context-free graph grammar and
a structured multiset belongs to a type $T$ if its underlying set of addresses
satisfies the {invariant}
expressed by the grammar defining $T$. We
define a type checking algorithm that allows us to prove
mechanically that a program maintains its data structure invariant. We
illustrate the significance of the approach for
program refinement and we describe its application to coordination.}
}


P. Fradet, M. Südholt
"Towards a Generic Framework for Aspect-Oriented Programming", In Third AOP Workshop, ECOOP'98 Workshop Reader, LNCS, vol. 1543, pp. 394-397, Springer-Verlag, July 1998.

Keywords: aspect-oriented programming, program transformation, program analysis.
@InProceedings{AOP98,
author       = "P. Fradet and M. {S\"udholt}",
title        = "Towards a Generic Framework for Aspect-Oriented Programming",
booktitle    = "{\it Third AOP Workshop, ECOOP'98 Workshop Reader}",
keywords     = {aspect-oriented programming, program transformation, program analysis},
MONTH        = {July},
year         = "1998",
volume       = {1543},
series       = {LNCS},
publisher    = {Springer-Verlag},
pages        = {394--397},
abstract     = {What exactly are aspects? How to weave?
What are the join points used to anchor aspects into the component program? Is
there a general purpose aspect language?  In this position paper, we address
these questions for a particular class of aspects: aspects expressible as
static, source-to-source program transformations. An aspect is defined as a
collection of program transformations acting on the abstract syntax tree of
the component program.  We discuss the design of a generic framework to
express these transformations as well as a generic weaver. The coupling of
component and aspect definitions can be defined formally using operators
matching subtrees of the component program. The aspect weaver is simply a
fixpoint operator taking as parameters the component program and a set of
program transformations.  In many cases, program transformations based solely
on syntactic criteria are not satisfactory and one would like to be able to
use semantic criteria in aspect definitions. We show how this can be done
using properties expressed on the semantics of the component program and
implemented using static analysis techniques. One of our main concerns is to
keep weaving predictable. This raises several questions about the semantics
(termination, convergence) of weaving.}
}


V. Gouranton
"Deriving analysers by folding/unfolding of natural semantics and a case study: slicing", In International Static Analysis Symposium, SAS'98, Lecture Notes in Computer Science, nº 1503, pp. 115-133, Springer-Verlag, Pisa, Italy, September 1998.

Keywords: systematic derivation, program transformation, natural semantics, proof tree, slicing analysis, logic programming language.
@INPROCEEDINGS{Gouranton:98:DerivingSAS,
AUTHOR       = {V. Gouranton},
BOOKTITLE    = {International Static Analysis Symposium, SAS'98},
MONTH        = {September},
NUMBER       = {1503},
PAGES        = {115-133},
PUBLISHER    = {Springer-Verlag},
SERIES       = {Lecture Notes in Computer Science},
TITLE        = {Deriving analysers
by folding/unfolding of natural semantics  and a case
study: slicing},
YEAR         = {1998},
URL          = {ftp://ftp.irisa.fr/local/lande/vg-sas98.ps.gz},
KEYWORDS     = {systematic derivation, program transformation, natural
semantics, proof tree, slicing analysis, logic programming language.},
ABSTRACT     = {We consider specifications of analysers expressed as compositions of
two functions: a semantic function, which returns a natural semantics
derivation tree, and a property defined by recurrence on derivation
trees. A recursive definition of a dynamic analyser can be obtained by
fold/unfold program transformation combined with deforestation.  A
static analyser can  then be derived by abstract interpretation of the
dynamic analyser.  We apply our framework to the derivation of a
dynamic backward slicing analysis for a logic programming language.}
}


V. Gouranton
"Deriving analysers by folding/unfolding of natural semantics and a case study: slicing", INRIA, technical report, nº 3413, April 1998, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE.

Keywords: systematic derivation, program transformation, natural semantics, proof tree, slicing analysis.
@TECHREPORT{Gouranton:98:DerivingAnalysers,
AUTHOR       = {V. Gouranton},
ADDRESS      = {Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE},
INSTITUTION  = {INRIA},
MONTH        = {April},
NUMBER       = {3413},
TITLE        = {Deriving analysers
by folding/unfolding of natural semantics  and a case
study: slicing},
YEAR         = {1998},
URL          = {ftp://ftp.inria.fr/INRIA/publication/RR/RR-3413.ps.gz},
KEYWORDS     = {systematic derivation, program transformation, natural
semantics, proof tree, slicing analysis},
ABSTRACT     = {We consider specifications of analysers expressed as compositions of
two functions: a semantic function, which returns a natural semantics
derivation tree, and a property defined by recurrence on derivation
trees. A recursive definition of a dynamic analyser can be obtained by
fold/unfold program transformation combined with deforestation. We
apply our framework to the derivation of a slicing analysis for a
logic programming language.}
}


V. Gouranton, D. Le Métayer
"Dynamic slicing: a generic analysis based on a natural semantics format", INRIA, technical report, nº 3375, March 1998, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE.

Keywords: dynamic slicing analysis, natural semantics, proof tree, correctness, systematic derivation.
@TECHREPORT{Gouranton:98:DynamicSlicing,
AUTHOR       = {V. Gouranton and D. {Le M\'etayer}},
ADDRESS      = {Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE},
INSTITUTION  = {INRIA},
MONTH        = {March},
NUMBER       = {3375},
TITLE        = {Dynamic slicing: a generic analysis based on a natural semantics format},
YEAR         = {1998},
URL          = {ftp://ftp.inria.fr/INRIA/publication/RR/RR-3375.ps.gz},
KEYWORDS     = {dynamic slicing analysis, natural semantics, proof tree, correctness, systematic derivation},
ABSTRACT     = {Slicing analyses have been proposed for different
programming languages. Rather than defining a new analysis from
scratch for each programming language, we would like to specify such
an analysis once for all, in a language-independent way, and then
specialise it for different programming languages. In order to achieve
this goal, we propose a notion of natural semantics format and a
dynamic slicing analysis format. The natural semantics format
formalises a class of natural semantics and the analysis format is a
generic, language-independent, slicing analysis. The correctness of
the generic analysis is established as a relation between the
derivation trees of the original program and the slice.  This generic
analysis is then instantiated to several programming languages
conforming to the semantics format (an imperative language, a logic
programming language and a functional language), yielding a dynamic
slicing analyser for each of these languages.}
}


T. Jensen
"Inference of polymorphic and conditional strictness properties", In Proc. of 25th ACM Symposium on Principles of Programming Languages, pp. 209-221, ACM Press, 1998.
@InProceedings{Jensen:98:Inference,
author       = {T.~Jensen},
url          = {http://www.irisa.fr/lande/jensen/papers/popl98.ps},
title        = {Inference of polymorphic and conditional strictness properties},
booktitle    = {Proc. of 25th {ACM} Symposium on Principles of
Programming Languages},
year         = {1998},
pages        = {209--221},
publisher    = {ACM Press},
abstract     = {We define an inference system for modular strictness
analysis of functional programs by extending a
conjunctive strictness logic with polymorphic and
conditional properties. This extended set of
properties is used to define a syntax-directed,
polymorphic strictness analysis based on polymorphic
recursion whose soundness is established via a
translation from the polymorphic system into the
conjunctive system. From the polymorphic analysis,
an inference algorithm based on constraint
resolution is derived and shown complete for a variant
of the polymorphic analysis. The algorithm deduces
at the same time a property and a set of hypotheses
on the free variables of an expression which makes
it suitable for the analysis of programs with module
structure.}
}


T. Jensen, D. Le Métayer, T. Thorn
"Security and Dynamic Class Loading in Java: A Formalisation", In Proceedings of the 1998 IEEE International Conference on Computer Languages, pp. 4-15, May 1998.
@InProceedings{tjdlmtt-iccl98,
author       = {T.~Jensen and  D.~{Le M\'etayer} and T.~Thorn},
title        = {Security and Dynamic Class Loading in Java: A Formalisation},
url          = {ftp://ftp.irisa.fr/local/lande/tjdlmtt-iccl98.ps.gz},
booktitle    = {Proceedings of the 1998 IEEE International Conference
on Computer Languages},
year         = {1998},
pages        = {4--15},
publisher    = {IEEE Computer Society},
address      = {New York},
month        = {May},
abstract     = {[...] of classes in the Java Virtual Machine (JVM) and of the visibility
of members of the loaded classes. This specification is obtained
by identifying the part of the run-time state of the JVM that is
[...] linking and verification of classes. The formalisation of
visibility includes an axiomatisation of the rules for membership
of a class under inheritance, and of accessibility of a member in
the presence of accessibility modifiers such as \texttt{private}
and \texttt{protected}. The contribution of the formalisation is
twofold. First, it provides a clear and concise description of
the informal definitions of the Java language and the JVM. Second,
it is sufficiently simple to allow calculations of the effects of [...]}
}


T. Jensen, D. Le Métayer, T. Thorn
"Verification of control flow based security policies", IRISA, technical report, nº 1210, 1998.

Keywords: security, verification, finite-state system, control flow, object orientation.
@TechReport{jensen:98:verification,
author       = {T.~Jensen and D.~{Le Métayer} and T.~Thorn},
institution  = {IRISA},
title        = {Verification of control flow based security policies},
number       = {1210},
year         = {1998},
url          = {ftp://ftp.irisa.fr/techreports/1998/PI-1210.ps.gz},
keywords     = {security, verification, finite-state system,
control flow, object orientation},
abstract     = {A fundamental problem in software-based security is
whether \textit{local} security checks inserted
into the code are sufficient to implement a given
\textit{global} security policy. We introduce a
formalism based on a two-level linear-time temporal
logic for specifying global security policies
pertaining to the control-flow of the program, and
illustrate its expressive power with a number of
existing policies. We define a minimalistic,
security-dedicated program model that only contains
procedure call and dynamic security checks and
propose an automatic method for verifying that an
implementation using local security constructs
satisfies a global security policy. For a given
formula in the temporal logic we prove that there
exists a bound on the size of the states that have
to be considered in order to assure the validity of
the formula: this reduces the problem to
finite-state model checking. Finally, we
instantiate the framework to the security
architecture proposed for Java (JDK~1.2).}
}


D. Le Métayer, V.-A. Nicolas, O. Ridoux
"Programs, Properties, and Data: Exploring the Software Development Trilogy", IEEE Software, vol. 15, nº 6, pp. 75-81, November/December 1998.

Keywords: Software engineering, testing, verification, program analysis, program learning.
@Article{dlm-van-or-IEEEsoftware98,
author       = {D. {Le M\'etayer} and V.-A. Nicolas and O. Ridoux},
title        = {Programs, Properties, and Data: Exploring the Software Development
Trilogy},
journal      = {IEEE Software},
pages        = {75--81},
volume       = {15},
number       = {6},
month        = {November/December},
year         = {1998},
url          = {http://www.irisa.fr/lande/vnicolas/articleIEEE98.pdf},
keywords     = {Software engineering, testing, verification, program analysis,
program learning},
abstract     = {Software development usually involves a collection of properties,
programs and data as input or output documents. Putting these three
kinds of documents at the vertices of a triangle, one sees that all
three sides of the triangle have been exploited in formal methods,
and that they have often been used in both directions. However,
richer combinations have seldom been envisaged, and formal methods
often amount to a strict orientation of the figure by imposing
functional dependencies (e.g.,~inferring test cases from specifications).
Moreover, undecidability problems arise when properties are expressed
in full predicate logic (or similar formalisms) or programs are
written in Turing-equivalent programming languages. We advocate that
(1) formal methods should provide more flexible ways to exploit the
developer's knowledge and offer a variety of possibilities to
construct programs, properties and test data and (2) it is worth
restricting the power of logic formalisms and programming languages
for the benefit of mechanization. We go one step in this direction,
and present a formal method for generating test cases that combines
techniques from abstract interpretation ({\em program -> property})
and testing ({\em program+property -> test data}), and takes inspiration
from automated learning (test generation via a {\em testing bias}).
The crucial property of the test suites generated this way is that
they are robust with respect to a test objective formalized as a
property. In other words, if a program passes the test suite, then
it is guaranteed to satisfy the property. As this process leads to
decision problems in very restricted formalisms, it can be fully
mechanized.}
}


J. Mallet
"Symbolic Cost Analysis and Automatic Data Distribution for a Skeleton-based Language", In Euro-Par'98 Parallel Processing, Lecture Notes in Computer Science, nº 1470, pp. 688-697, Springer-Verlag, Southampton, UK, September 1998.
@InProceedings{Mallet98a,
author       = "J. Mallet",
title        = "Symbolic Cost Analysis and Automatic Data Distribution
for a Skeleton-based Language",
booktitle    = "Euro-Par{'}98 Parallel Processing",
pages        = "688--697",
year         = "1998",
PUBLISHER    = {Springer-Verlag},
number       = 1470,
MONTH        = {September},
SERIES       = {Lecture Notes in Computer Science},
}


J. Mallet
"Compilation of a skeleton-based parallel language through symbolic cost analysis and automatic distribution", INRIA, technical report, nº 3436, May 1998, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE.

Keywords: skeleton-based language, parallelism, cost analysis, automatic data distribution.
@TECHREPORT{Mallet98b,
AUTHOR       = {J. Mallet},
ADDRESS      = {Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le chesnay Cedex FRANCE},
INSTITUTION  = {INRIA},
MONTH        = {May},
NUMBER       = {3436},
TITLE        = {Compilation of a skeleton-based parallel language through symbolic cost analysis and automatic distribution},
YEAR         = {1998},
url          = {ftp://ftp.inria.fr/INRIA/publication/RR/RR-3436.ps.gz},
keywords     = {skeleton-based language, parallelism, cost analysis,
automatic data distribution},
abstract     = {We present a skeleton-based language which leads to portable
and cost-predictable implementations on MIMD computers.
The compilation process is described as a series
of program transformations. We focus in this paper on
the step concerning the distribution choice.
The problem of automatic mapping of input vectors
onto processors is addressed using symbolic cost evaluation.
Source language restrictions
are crucial since they make it possible to
use powerful techniques on polytope volume computations
to evaluate costs precisely.
The approach can be seen as a
cross-fertilization between techniques developed within
the FORTRAN parallelization and skeleton communities.}
}


J. Mallet
"Compilation d'un langage spécialisé pour machine massivement parallèle", Université de Rennes I, Ifsic, Irisa, PhD thesis, 1998.

Keywords: parallelism, compilation, specialized language, program skeleton, data distribution, program transformation, cost analysis.
@PHDTHESIS{Mallet98c,
author       = {J. Mallet},
school       = {Université de Rennes I, Ifsic, Irisa},
title        = {Compilation d'un langage spécialisé pour
machine massivement parallèle},
year         = {1998},
url          = {ftp://ftp.irisa.fr/techreports/theses/1998/mallet.ps.gz},
abstract     = {The programming languages used for parallel computers are
either efficient languages with explicit parallelism, which are not
portable and are very complex to use, or simple portable languages
whose compilation is complex and relatively inefficient.

We propose a specialized language based on program skeletons encapsulating
data and control flow for which an accurate cost analysis of the parallel
implementation exists. The compilation process deals with the
automatic choice of the data distributions on the processors, guided by
the accurate costs guaranteed by the source language. This makes it possible
to obtain automatic compilation into efficient parallel code
(the distributions representing a global choice of parallel
implementation).

The compilation process is described as a series of program
transformations, each transformation mapping one intermediate skeleton-based
language into another. The target language is an SPMD-like skeleton-based
language straightforwardly translating into a sequential language with
calls to communication library routines. The main compilation steps are:
size analysis, the in-place updating transformation, making the
communications explicit, and the choice of data distributions.},
keywords     = {parallelism, compilation, specialized language,
program skeleton, data distribution,
program transformation, cost analysis}
}


S. Mallet, M. Ducassé
"Pilotage d'un méta-interprète ensembliste par une « trace relationnelle » pour le débogage de bases de données déductives", In Journées francophones de Programmation Logique et programmation par Contraintes, pp. 151-165, O. Ridoux (Eds), Hermes, Nantes, May 1998, JFPLC'98.

Keywords: bases de données déductives, débogage, trace, sémantique opérationnelle, multi-SLD-AL, méta-interprète, ensembles de substitutions, instrumentation.
@InProceedings{mallet98a,
author       = "S. Mallet and M. Ducass\'{e}",
title        = "Pilotage d'un m\'{e}ta-interpr\`{e}te ensembliste
par une ``trace relationnelle'' pour le d\'{e}bogage
de bases de donn\'{e}es d\'{e}ductives",
editor       = "O. Ridoux",
pages        = "151--165",
booktitle    = "Journ\'{e}es francophones de Programmation Logique
et programmation par Contraintes",
year         = "1998",
organization = "JFPLC'98",
publisher    = "Hermes",
month        = "May",
Url          = {ftp://ftp.irisa.fr/local/lande/sm-md-jfplc98.ps.gz},
Keywords     = {bases de données déductives, débogage, trace,
sémantique opérationnelle, multi-SLD-AL,
méta-interprète, ensembles de substitutions,
instrumentation},
Abstract     = {Le développement des bases de données déductives
nécessite des outils, en particulier pour le
débogage. Les bases de données déductives gèrent des
quantités importantes de données et, en général, de
manière ensembliste. Les systèmes d'explication
existants pour les bases de données déductives ne
prennent pas en compte ces contraintes. Nous
proposons une technique de traçage qui consiste à
intégrer une « trace relationnelle » avec un
méta-interprète instrumenté utilisant des ensembles
de substitutions. La trace relationnelle donne, de
manière efficace, de l'information précise sur
l'extraction de données de la base relationnelle. Le
méta-interprète ensembliste gère des ensembles de
substitutions et donne des explications sur la
déduction. Les aspects coûteux de la
méta-interprétation sont réduits par l'utilisation
de la trace qui évite beaucoup de calculs. La
flexibilité de la méta-interprétation est
conservée. Elle permet de produire facilement des
traces de profils différents.}
}


V.-A. Nicolas
"Preuves de propriétés de classes de programmes par dérivation systématique de jeux de test", Université de Rennes I, Ifsic, Irisa, PhD thesis, December 1998.

Keywords: Software engineering, program verification, white-box testing, automated test data generation, program analysis, program schemes.
@PhDThesis{Nicolas-these98,
author       = {V.-A. Nicolas},
school       = {Université de Rennes I, Ifsic, Irisa},
title        = {Preuves de propriétés de classes de programmes par dérivation
systématique de jeux de test},
month        = {December},
year         = {1998},
url          = {ftp://ftp.irisa.fr/techreports/theses/1998/nicolas.ps.gz},
keywords     = {Software engineering, program verification, white-box testing,
automated test data generation, program analysis, program schemes},
abstract     = {The problem addressed in this thesis is the automatic generation of
test data sets enabling the proof of program properties. We thus
place ourselves half-way between the domain of software testing and
that of program verification. The work on software testing has led
to easy-to-use semi-automatic tools, which however rely on hypotheses
that are difficult to verify in practice. In the domain of verification,
some tools based on formal methods have been developed, but they often
require a user very experienced with the proof techniques used by
the tool. This is due to undecidability problems caused by
the Turing completeness of the formalisms treated. Our primary
contribution is a new approach to program verification, combining
the techniques of software testing and static analysis.

We propose a formal method for the generation of finite test data
sets, allowing one to prove that a program satisfies a given property.
This method uses the program source and the property, which must
belong to a certain class of programs (or properties). These classes
are represented by hierarchies of program schemes, which can be seen
as modeling test hypotheses. Every program belonging to one of
our schemes and passing the generated test data set satisfies the
tested property. For a given property our method is completely automatic and
thus does not require any special competence of the user. We have
implemented this method in a prototype (for a restricted functional
language), for properties expressible in terms of list length.}
}


P. Fradet, D. Le Métayer
"Shape types", In Proc. of Principles of Programming Languages, ACM Press, Paris, France, Jan. 1997.

Keywords: Pointer structures and manipulations, graph grammars, type checking, program robustness, C.
@InProceedings{popl97,
author       = {P. Fradet and D. Le M\'etayer},
title        = {Shape types},
booktitle    = {Proc. of Principles of Programming Languages},
year         = 1997,
url          = {ftp://ftp.irisa.fr/local/lande/pfdlm-popl97.ps.gz},
keywords     = {Pointer structures and manipulations, graph grammars, type
checking, program robustness, C},
abstract     = {Type systems currently available for imperative languages
are too weak to detect a significant class of programming
errors. For example, they cannot express the property that
a list is doubly-linked or circular. We propose a solution
to this problem based on a notion of {\em shape types}
defined as context-free graph grammars. We define graphs in
set-theoretic terms, and graph modifications as multiset
rewrite rules. These rules can be checked statically to
ensure that they preserve the structure of the graph
specified by the grammar. We provide a syntax for a smooth
integration of shape types in C. The programmer can still
express pointer manipulations with the expected constant
time execution and benefits from the additional guarantee
that the property specified by the shape type is an
invariant of the program. },
publisher    = {ACM Press},
month        = {Jan.}
}


V. Gouranton
"Dérivation d'analyseurs dynamiques et statiques à partir de spécifications opérationnelles", Université de Rennes I, Ifsic, Irisa, PhD thesis, 1997.

Keywords: analyse dynamique, analyse statique, sémantique naturelle, transformation de programmes, interprétation abstraite, analyse de durée de vie, analyse de nécessité, élagage de programmes.
@PHDTHESIS{Gouranton:97:DerivationD,
author       = {V. Gouranton},
url          = {ftp://ftp.irisa.fr/local/lande/vgthese.ps.gz},
school       = {Université de Rennes I, Ifsic, Irisa},
title        = {Dérivation d'analyseurs dynamiques et
statiques à partir de spécifications
opérationnelles},
year         = {1997},
keywords     = {analyse dynamique, analyse statique, sémantique
naturelle, transformation de programmes,
interprétation abstraite, analyse de durée de vie,
analyse de nécessité, élagage de programmes},
abstract     = {Nous présentons un cadre de conception d'analyseurs
dynamiques et d'analyseurs statiques à partir de
spécifications opérationnelles. Nous proposons des
spécifications d'analyseurs en deux parties :
la sémantique du langage de programmation considéré
et la définition de la propriété recherchée.
Nous considérons le cadre de la sémantique naturelle
pour la définition du langage de programmation.
Ces sémantiques présentent un double avantage :
elles sont à la fois compositionnelles et
intentionnelles. Les propriétés sont définies comme
des fonctions sur les arbres de preuves de la
sémantique. Celles qui ne sont pas spécifiques
à un langage donné peuvent ainsi être définies une
fois pour toutes et spécialisées pour des langages
particuliers.

La composition de la sémantique et de la propriété
définit une analyse dynamique {\em a posteriori} ;
il s'agit en effet d'une fonction qui calcule dans
un premier temps la trace d'exécution (arbre de
preuve) complète d'un programme avant d'en extraire
la propriété recherchée. Des transformations de
programmes permettent d'obtenir une définition
récursive autonome de l'analyseur.
Cette fonction est en fait un analyseur
dynamique {\em à la volée} dans le sens où il
calcule la propriété recherchée au fur et à mesure
de l'exécution du programme. On réalise ensuite une
abstraction de cet analyseur pour obtenir un
analyseur statique : on applique pour ce faire la
théorie de l'interprétation abstraite.

Nous illustrons notre cadre à l'aide d'exemples
d'analyses spécifiques comme la durée de vie d'un
langage impératif et l'analyse de nécessité d'un
langage fonctionnel d'ordre supérieur. Nous montrons
aussi comment une analyse d'intérêt général,
en l'occurrence le filtrage de programmes, peut être
définie sur un format de sémantique, puis instanciée
à différents langages.}
}


V. Gouranton
"Un cadre générique de définition d'analyseurs dynamiques et statiques", In Journées du GDR Programmation, Rennes, November 1997.

Keywords: analyse dynamique, analyse statique, slicing, sémantique opérationnelle.
@InProceedings{vg_gdr97,
author       = {V. Gouranton},
url          = {ftp://ftp.irisa.fr/local/lande/vg_gdr97.ps.Z},
booktitle    = {Journ\'ees du GDR Programmation},
month        = {November},
title        = {Un cadre g\'en\'erique de d\'efinition d'analyseurs
dynamiques et statiques},
year         = {1997},
keywords     = {analyse dynamique, analyse statique, slicing,
sémantique opérationnelle},
abstract     = {Dans \cite{Gouranton:97:DerivationD}, nous
présentons un cadre de conception d'analyseurs
dynamiques et d'analyseurs statiques à partir
de spécifications opérationnelles.  Nous montrons
notamment comment une analyse d'intérêt général,
en l'occurrence le filtrage ({\em slicing}) de
programmes, peut être définie sur un format de
sémantique, puis instanciée à différents langages.
Dans cet article, nous rappelons les principaux
points du cadre générique de définition d'analyseurs
dynamiques et statiques présenté dans
\cite{Gouranton:97:DerivationD} et nous montrons
comment notre cadre est appliqué au cas du filtrage
d'un langage impératif.}
}


V. Gouranton, D. Le Métayer
"Formal development of static program analysers", In The 8th Israel Conference on Computer Systems and Software Engineering, IEEE, IFCC, ISySe'97, Herzliya, Israel, June 1997.

Keywords: functional languages, natural semantics, neededness analysis, paths analysis, program transformation, optimising compilers.
@InProceedings{vgdlm97,
author       = {V. Gouranton and D. {Le M\'etayer}},
url          = {ftp://ftp.irisa.fr/local/lande/vgdlm-ISySe97.ps.gz},
booktitle    = {The 8th Israel Conference on Computer Systems and Software
Engineering, IEEE, IFCC, ISySe'97},
month        = {June},
title        = {Formal development of static program analysers},
year         = {1997},
keywords     = {functional languages, natural semantics, neededness
analysis, paths analysis, program transformation,
optimising compilers},
abstract     = {We propose an approach for the formal development of
static analysers which is based on transformations of
inference systems. The specification of an analyser is made
of two components: an operational semantics of the
programming language and the definition of a property by
recurrence on the proof trees of the operational semantics.
The derivation is a succession of specialisations of
inference systems with respect to properties on their proof
trees. In this paper, we illustrate the methodology with
the derivation of analysers for a non strict functional
language. }
}


C. Hankin, D. Le Métayer, D. Sands
"Refining multiset transformers", Theoretical Computer Science, 1997.

Keywords: program transformation, composition operator, parallelism, Gamma, chemical reaction.
@Article{dlm-tcs97,
author       = {C. Hankin and  D. {Le M\'etayer} and D. Sands},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-tcs97.ps.gz},
title        = {Refining multiset transformers},
journal      = {Theoretical Computer Science},
year         = {1997},
keywords     = {program transformation, composition operator,
parallelism, Gamma, chemical reaction},
abstract     = {Gamma is a minimal language based
on local multiset rewriting with an
elegant chemical reaction metaphor.
In this paper, we study a notion of
refinement for Gamma  programs involving
parallel and sequential composition operators, and
derive a number of programming laws. The calculus
thus obtained is applied in the development of a generic
``pipelining'' transformation, which enables certain
sequential compositions to be refined into parallel
compositions.}
}


A. A. Holzbacher, M. Périn, M. Südholt
"Modeling railway control systems using graph grammars: a case study", In 2nd International Conference on COORDINATION, LNCS 1282, Springer Verlag, September 1997, long version published as technical report, INRIA, no. 3210: see ftp://ftp.inria.fr/INRIA/publication/publi-ps-gz/RR/RR-3210.ps.gz.

Keywords: software architecture, graph grammar, software evolution.
@InProceedings{hps97,
author       = {A. A. Holzbacher and M. P{\'e}rin and M.
S{\"u}dholt},
homepage     = {http://www.irisa.fr/EXTERNE/projet/lande/mperin/mperin.html ;
http://www.emn.fr/dept_info/perso/sudholt/},
url          = {ftp://ftp.irisa.fr/local/lande/coordination97.ps.gz},
booktitle    = {2nd International Conference on COORDINATION},
month        = {September},
title        = {Modeling railway control systems using graph grammars: a
case study},
year         = {1997},
publisher    = {Springer Verlag},
series       = {LNCS 1282},
keywords     = {software architecture, graph grammar, software
evolution},
note         = {long version published as technical report, INRIA, no. 3210:
see ftp://ftp.inria.fr/INRIA/publication/publi-ps-gz/RR/RR-3210.ps.gz},
abstract     = {In this paper we develop in three phases a railway control
system. We are mainly concerned with the \emph{software
architecture} of the control system and its \emph{dynamic
evolution}; we do not discuss here the implementation
details of the components forming the control system.
First, we informally discuss our design proposal for the
architecture of the control system: a hierarchy of
controllers whose leaves are local controllers connected in
a network that mimics the underlying railway topology.
Second, we formally define by means of particular
\emph{graph grammars} a style of software architectures for
the railway control system consisting of two complementary
\emph{views} and ensuring several desirable properties by
construction. The dynamic evolution of the architecture is
modelled by a set of \emph{coordination rules} which define
graph transformations and are \emph{verified} w.r.t. the
graph grammar. Third, using a coordination rule as a formal
specification of a dynamic modification of the railway
control system, we derive its implementation in ConCoord, a
programming environment for concurrent coordinated
programming. With regard to software engineering, the two
first phases belong to the system design while the third
one forms the first implementation step.}
}


T. Jensen
"Disjunctive Program Analysis for Algebraic Data Types", ACM Transactions on Programming Languages and Systems, vol. 19, nº 5, pp. 752-804, 1997.
@Article{Jensen:97:Disjunctive,
author       = {T.~Jensen},
url          = {http://www.irisa.fr/lande/jensen/papers/toplas97.ps},
title        = {Disjunctive Program Analysis for Algebraic Data Types},
journal      = {{ACM} Transactions on Programming Languages and Systems},
year         = {1997},
volume       = {19},
number       = {5},
pages        = {752--804},
abstract     = {We describe how binding-time, data-flow, and
strictness analyses for languages with higher-order
functions and algebraic data types can be obtained
by instantiating a generic program logic and
axiomatization of the properties analyzed for. A
distinctive feature of the analyses is that
disjunctions of program properties are represented
exactly. This yields analyses of high precision and
provides a logical characterization of abstract
interpretations involving tensor products and
uniform properties of recursive data structures. An
effective method for proving properties of a program
based on fixed-point iteration is obtained by
grouping logically equivalent formulae of the same
type into equivalence classes, obtaining a lattice
of properties of that type, and then defining an
abstract interpretation over these lattices. We
demonstrate this in the case of strictness analysis
by proving that the strictness abstract
interpretation of a program is the equivalence class
containing the strongest property provable of the
program in the strictness logic.}
}


D. Le Métayer
"Program analysis for software engineering: new applications, new requirements, new tools", ACM Sigplan Notices, nº 1, pp. 86-88, January 1997.

Keywords: program analysis, debugging, testing, correctness, scalability, interaction.
@Article{dlm-sig97,
author       = {D. {Le M\'etayer}},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-sig97.ps.Z},
title        = {Program analysis for software engineering: new applications, new requirements, new tools},
journal      = {ACM Sigplan Notices},
year         = {1997},
keywords     = {program analysis, debugging, testing, correctness,
scalability, interaction},
number       = {1},
pages        = {86--88},
month        = {January},
abstract     = {In order to play a bigger role in
software engineering tools, static analysis techniques must
take into account the specific needs of this application
area, in particular in terms of interaction with the user
and scalability. This new perspective requires a
reexamination of the field of static program analysis
both internally and in connection with related
areas like theorem proving and debugging.}
}


S. Mallet, M. Ducassé
"DDB trees: a basis for deductive database explanations", In AADEBUG'97, Third International Workshop on Automated Debugging, pp. 87-102, Mariam Kamkar (Eds), Linköping, Sweden, May 1997.

Keywords: debugging, explanations, deductive databases, logic programming.
@InProceedings{mallet97a,
Author       = {S. Mallet and M. Ducass\'{e}},
Title        = {{DDB} trees: a basis for deductive database
explanations},
BookTitle    = {AADEBUG'97, Third International Workshop on Automated
Debugging},
Year         = 1997,
Editor       = {Mariam Kamkar},
Pages        = {87--102},
Month        = {May},
Url          = {http://www.ep.liu.se/ea/cis/1997/009/09/ },
Keywords     = {debugging, explanations, deductive databases, logic
programming},
Abstract     = {The power of deductive systems in general is that
programs express what should be done and not how it
should be done. Nevertheless, deductive systems need
debugging and explanation facilities. Indeed, their
operational semantics is less abstract than the
declarative semantics of the programs. If users have
to understand all the low level details of the
operational semantics much of the benefits of using
a deductive system is lost. Existing explanation
systems for deductive databases produce proof trees
to be shown to users. Although useful, proof trees
give a fragmented view of query evaluations, and
users face a, most of the time large, forest of
proof trees. We propose a new data structure, called
the DDB tree, which merges the information of a
proof tree forest into one concise tree. A DDB tree
gives a global picture of a query evaluation in a
dramatically reduced structure with no loss of
information. DDB trees can be shown to users or can
be analyzed further by an explanation system.}
}


S. Mallet, M. Ducassé
"An Informal Presentation of DDB Trees: A Basis for Deductive Database Explanations", In DDLP'97, Fifth International Workshop on Deductive Databases and Logic Programming, Ulrich Geske (Eds), GMD-Studien, July 1997.

Keywords: debugging, explanations, deductive databases, logic programming.
@InProceedings{mallet97b,
Author       = {S. Mallet and M. Ducass\'{e}},
Title        = {An {I}nformal {P}resentation of {DDB} {T}rees: {A}
{B}asis for {D}eductive {D}atabase {E}xplanations},
BookTitle    = {DDLP'97, Fifth International Workshop on Deductive
Databases and Logic Programming},
Year         = {1997},
Editor       = {Ulrich Geske},
Publisher    = {GMD-Studien},
Month        = {July},
Keywords     = {debugging, explanations, deductive databases, logic programming}
}


P. Pepper, M. Südholt
"Deriving Parallel Numerical Algorithms using Data Distribution Algebras: Wang's Algorithm", In Proc. of the 30rd Hawaii International Conference on System Sciences, IEEE, Jan 1997.

Keywords: functional, parallel programming, skeleton, data distribution algebra, partition algorithm.
@InProceedings{ps97a,
author       = {P. Pepper and M. S{\"u}dholt},
title        = {Deriving Parallel Numerical Algorithms using Data
Distribution Algebras: Wang's Algorithm},
booktitle    = {Proc. of the 30th Hawaii International Conference on
System Sciences},
year         = 1997,
month        = {Jan},
publisher    = {IEEE},
url          = {http://uebb.cs.tu-berlin.de/papers/published/HICSS97.dvi.gz},
keywords     = {functional, parallel programming, skeleton,
data distribution algebra, partition algorithm},
abstract     = {Parallel and distributed programming are much more
difficult than the development of sequential algorithms due
to data distribution issues and communication requirements.
This paper presents a methodology that enables the abstract
description of the distribution of data structures by means
of overlapping covers that form \emph{data distribution
algebras}. Algorithms are formulated and derived by
transformation in a functional environment using
\emph{skeletons}, i.e., higher-order functions with specific
parallel implementations. Communication is specified in
terms of covers. Such specifications enable the derivation of
explicit lower-level communication statements. We
illustrate the concepts by a complete derivation of Wang's
partition algorithm for the solution of tridiagonal systems
of linear equations. },
size         = 40
}


M. Südholt, C. Piepenbrock, K. Obermayer, P. Pepper
"Solving Large Systems of Differential Equations using Covers and Skeletons", In 50th Working Conference on Algorithmic Languages and Calculi, Chapman & Hall, Feb 1997.

Keywords: functional, parallel programming, numerical algorithm, program transformation, skeleton, data distribution algebra.
@InProceedings{spop97a,
author       = {M. S{\"u}dholt and C. Piepenbrock and K. Obermayer and P.
Pepper},
title        = {Solving Large Systems of Differential Equations using
Covers and Skeletons},
booktitle    = {50th Working Conference on Algorithmic Languages and
Calculi},
year         = {1997},
publisher    = {Chapman \& Hall},
month        = {Feb},
url          = {http://uebb.cs.tu-berlin.de/papers/published/WG21-0297.ps.gz},
keywords     = {functional, parallel programming, numerical algorithm,
program transformation, skeleton, data distribution algebra},
abstract     = {The design and implementation of parallel algorithms for
distributed memory architectures is much harder than the
development of sequential algorithms. This is mainly due to
the communication and synchronization that is necessary to
manage distributed data correctly. This paper applies a
methodology for the transformational derivation of parallel
programs using \emph{data distribution algebras} that
enable an abstract description of data distribution issues.
Algorithms are formulated using \emph{skeletons}, that is,
specialized higher-order functions with particular parallel
implementations. The methodology is applied to the
solution of a \emph{system of ordinary differential
equations} where convolutions can be computed using the
Fast Fourier transformation. The example illustrates the
practical optimization problems for a development model of
the visual system that involves large scale neural network
simulations. Finally, this algorithm is compared to an
implementation of the same system of equations in the
programming language C* on a CM-5.},
size         = 150
}


T. Thorn
"Programming Languages for Mobile Code", ACM Computing Surveys, vol. 29, nº 3, pp. 213-239, September 1997.

Keywords: mobile code, distribution, network programming, portability, safety, security, object orientation, formal methods, Java, Obliq, Limbo, Safe-Tcl, Objective Caml, Telescript.
@Article{Thorn97,
author       = {T.~Thorn},
url          = {ftp://ftp.inria.fr/INRIA/publication/publi-ps-gz/RR/RR-3134.ps.gz},
title        = {Programming Languages for Mobile Code},
journal      = {ACM Computing Surveys},
year         = 1997,
volume       = 29,
number       = 3,
pages        = {213--239},
month        = {September},
keywords     = {mobile code, distribution, network programming,
portability,  safety, security, object orientation,
formal methods, Java, Obliq, Limbo, Safe-Tcl,
Objective Caml, Telescript.},
abstract     = {Sun's announcement of the programming language Java
more than anything popularized the notion of mobile
code, that is, programs that travel on a
heterogeneous network and automatically execute
upon arrival at the destination.  We describe
several classes of mobile code and we extract their
common characteristics, where security proves to be
one of the major concerns.  With these
characteristics as reference points, we examine six
representative languages proposed for mobile code.
The conclusion of this study leads to our
recommendations for future work, illustrated by
examples of ongoing research.}
}


L. Van Aertryck, M. Benveniste, D. Le Métayer
"CASTING: A formally based software test generation method", In The 1st International Conference on Formal Engineering Methods, IEEE, ICFEM'97, Hiroshima, Japan, November 1997.

Keywords: test cases, test suites, test generation, uniformity hypothesis, formal method, validation, functional testing, structural testing, attributed grammar, constraint solving.
@Conference{lvambdlm97,
author       = {L. {Van~Aertryck} and M. Benveniste and D. {Le M\'etayer}},
url          = {ftp://ftp.irisa.fr/local/lande/lvambdlm-Icfem97.ps.gz},
booktitle    = {The 1st International Conference on Formal Engineering Methods, IEEE, ICFEM'97},
month        = {November},
title        = {CASTING: A formally based software test generation method},
year         = {1997},
keywords     = {test cases, test suites, test generation,
uniformity hypothesis, formal method, validation,
functional testing, structural testing, attributed
grammar, constraint solving},
abstract     = {In this paper, we present CASTING, a Computer
Assisted Software Test engineering method.
The method, supported by a prototype tool, generates
realistic software test suites in a formal and
semi-automatic way. Based on a two-layer modular
architecture, CASTING is not tied to any particular
style of input. Initially designed to engineer functional
test suites from formal specifications, CASTING can
easily be instantiated to accept other formal
description levels, seamlessly supporting the main
industrial testing techniques, ranging from
functional testing to structural testing. A simple
process scheduler specification is used in this
paper to illustrate the interaction
facilities of CASTING and to provide a sample
of the resulting test suites.}
}


J. -M. Andréoli, C. Hankin, D. Le Métayer
"Coordination Programming: mechanisms, models and semantics", Word Scientific Publishing, IC Press, 1996.
@Book{ahm96,
author       = {J.-M. Andr\'eoli and C. Hankin and D. Le M\'etayer},
title        = {Coordination Programming: mechanisms, models and
semantics},
publisher    = {World Scientific Publishing, IC Press},
year         = {1996},
abstract     = {This book focuses on a class of coordination models where
multiple pieces of software, possibly concerned with a wide
range of domains and not necessarily built to work
together, coordinate their activities through some shared
dataspace and engage in interactions which are
metaphorically analogous to "chemical" reactions. Exemplars
of this class are Gamma, LO, Linda, and Shared Prolog. They
have been extensively studied in the framework of the
Esprit project BRA-9102 "Coordination", in which all the
contributors of this book have been involved to some
extent.}
}


J. -P. Banâtre, D. Le Métayer
"Gamma and the chemical reaction model: ten years after", In Coordination programming: mechanisms, models and semantics, IC Press, World Scientific Publishing, 1996.
@InProceedings{bm96b,
author       = {J.-P. Ban\^atre and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-gamma10.ps.Z},
booktitle    = {Coordination programming: mechanisms, models and
semantics, IC Press},
title        = {Gamma and the chemical reaction model: ten years after},
abstract     = {Gamma was originally proposed in 1986 as a formalism for
the definition of programs without artificial
sequentiality. The basic idea underlying the formalism is
to describe computation as a form of chemical reaction on a
collection of individual pieces of data. Due to the very
minimal nature of the language, and its absence of
sequential bias, it has been possible to exploit this
initial paradigm in various directions. This paper reviews
most of the work done by various groups along these lines
and the current perspectives of our own research on Gamma.
For the sake of clarity, we separate the contributions in
three categories: (1) the relevance of the chemical
reaction model for software engineering, (2) extensions of
the original model and (3) implementation issues.},
publisher    = {World Scientific Publishing},
year         = {1996}
}


G. Burn, D. Le Métayer
"Proving the correctness of compiler optimisations based on a global analysis: a study of strictness analysis", Journal of Functional Programming, vol. 6, nº 1, pp. 75-109, 1996.
@Article{bm96a,
author       = {G. Burn and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-jfp95.ps.Z},
journal      = {Journal of Functional Programming},
pages        = {75-109},
title        = {Proving the correctness of compiler optimisations based on
a global analysis: a study of strictness analysis},
volume       = 6,
number       = 1,
year         = {1996},
abstract     = {A substantial amount of work has been devoted to the proof
of correctness of various program analyses but much less
attention has been paid to the correctness of compiler
optimisations based on these analyses. In this paper, we
tackle the problem in the context of strictness analysis
for lazy functional languages. We show that compiler
optimisations based on strictness analysis can be expressed
formally in the functional framework using continuations.
This formal presentation has two benefits: it allows us to
give a rigorous correctness proof of the optimised compiler
and it exposes the various optimisations made possible by a
strictness analysis.}
}


"A taxonomy of functional language implementations. Part I: Call-by-Value", INRIA, technical report, nº 2783, Jan. 1996.

Keywords: Compilation, optimizations, program transformation, lambda-calculus, combinators.
@TechReport{taxo-1,
author       = {R. Douence and P. Fradet},
title        = {A taxonomy of functional language implementations. Part
{I}: Call-by-Value},
institution  = {INRIA},
year         = 1996,
url          = {ftp://ftp.irisa.fr/local/lande/rdpf-RR2783.ps.gz},
keywords     = {Compilation, optimizations, program transformation,
lambda-calculus, combinators.},
abstract     = {We present a unified framework to describe and compare
functional language implementations. We express the
compilation process as a succession of program
transformations in the common framework. At each step,
different transformations model fundamental choices or
optimizations. A benefit of this approach is to structure
and decompose the implementation process. The correctness
proofs can be tackled independently for each step and
amount to proving program transformations in the functional
world. It also paves the way to formal comparisons by
estimating the complexity of individual transformations or
compositions of them. We focus on call-by-value
implementations, describe and compare the diverse
alternatives and classify well-known abstract machines.
This work also aims to open the design space of functional
language implementations and we suggest how distinct
choices could be mixed to yield efficient hybrid abstract
machines.},
number       = 2783,
month        = {Jan.}
}


"A taxonomy of functional language implementations. Part II: Call-by-Name, Call-by-Need and Graph Reduction.", INRIA, technical report, nº 3050, Nov. 1996.

Keywords: Compilation, optimizations, program transformation, lambda-calculus, combinators.
@TechReport{taxo-2,
author       = {R. Douence and P. Fradet},
title        = {A taxonomy of functional language implementations. Part
{II}: Call-by-Name, Call-by-Need and Graph Reduction.},
institution  = {INRIA},
year         = 1996,
url          = {ftp://ftp.irisa.fr/local/lande/rdpf-RR3050.ps.gz},
keywords     = {Compilation, optimizations, program transformation,
lambda-calculus, combinators.},
abstract     = {In Part I, we proposed an approach to formally describe
and compare functional language implementations. We
focused on call-by-value and described well-known compilers
for strict languages. Here, we complete our exploration of
the design space of implementations by studying
call-by-name, call-by-need and graph reduction. We express
the whole compilation process as a succession of program
transformations in a common framework. At each step,
different transformations model fundamental choices or
optimizations. We describe and compare the diverse
alternatives for the compilation of the call-by-name
strategy in both environment and graph-based models. The
different options for the compilation of beta-reduction
described in part I can be applied here as well. Instead,
we describe other possibilities specific to graph
reduction. Call-by-need is nothing but call-by-name with
redex sharing and update. We present how sharing can be
expressed in our framework and we describe different update
schemes. We finally classify some well-known call-by-need
implementations.},
number       = 3050,
month        = {Nov.}
}


"Décrire et comparer les implantations de langages fonctionnels", In Journées francophones des langages applicatifs, Collection INRIA Didactique, pp. 183-203, Val-Morin, Québec, Canada, Jan. 1996.

Keywords: Compilation, optimizations, program transformation, CAM, Krivine Machine, Tabac.
@InProceedings{jfla96,
author       = {R. Douence and P. Fradet},
title        = {D\'ecrire et comparer les implantations de langages
fonctionnels},
booktitle    = {Journ\'ees francophones des langages applicatifs},
year         = 1996,
url          = {ftp://ftp.irisa.fr/local/lande/rdpf-jfla96.ps.gz},
keywords     = {Compilation, optimizations, program transformation, CAM,
Krivine Machine, Tabac.},
abstract     = {Nous proposons un cadre formel pour d\'ecrire et comparer
les implantations de langages fonctionnels. Nous
d\'ecrivons le processus de compilation comme une suite de
transformations de programmes dans le cadre fonctionnel.
Les choix fondamentaux de mise en oeuvre ainsi que les
optimisations s'expriment naturellement comme des
transformations diff\'erentes. Les avantages de cette
approche sont de d\'ecomposer et de structurer la
compilation, de simplifier les preuves de correction et de
permettre des comparaisons formelles en \'etudiant chaque
transformation ou leur composition. Nous nous concentrons
ici sur l'appel par valeur et d\'ecrivons trois
implantations diff\'erentes: la Cam, Tabac et une version
stricte de la machine de Krivine.},
series       = {Collection INRIA Didactique},
month        = {Jan.},
pages        = {183-203}
}


M. Ducassé, J. Noyé
"Tracing Prolog without a tracer", In Proceedings of the poster session at JICSLP'96, pp. 223-232, N. Fuchs, U. Geske (Eds), GMD- Forschungszentrum Informationstechnik GMBH, GMD-STUDIEN Nr.296, ISBN3-88457-296-2, September 1996, One page abstract also appears in Proc. of the JICSLP'96, MIT Press, ISBN 0-262-63173-3.
@InProceedings{dn96,
author       = {M. Ducass\'{e} and J. Noy\'{e}},
title        = {Tracing {Prolog} without a tracer},
booktitle    = {Proceedings of the poster session at JICSLP'96},
year         = {1996},
editor       = {N. Fuchs and U. Geske},
pages        = {223-232},
publisher    = {GMD-Forschungszentrum Informationstechnik GMBH,
GMD-STUDIEN Nr. 296, ISBN 3-88457-296-2},
month        = {September},
note         = {One page abstract also appears in Proc. of the JICSLP'96,
MIT Press, ISBN 0-262-63173-3},
abstract     = {Tracing by automatic program source instrumentation has
major advantages over compiled code instrumentation: it is
cheaper to develop and more portable, it benefits from most
compiler optimizations, it produces traces in terms of the
original program, and it can be tailored to specific
debugging needs. The usual argument in favor of compiled
code instrumentation is its supposed efficiency. Tolmach
and Appel 1 designed and implemented a tracer for Standard
ML based on automatic program source instrumentation. The
resulting code runs only 3 times slower than optimized
code. They conjectured that a low-level tracer would run
faster, but they had no low-level tracer at hand to
actually compare their results with. We have performed
such a comparison in the context of
Prolog, using the ECRC ECLiPSe environment. The built-in
low-level tracer of ECLiPSe is, at present, one of the most
interesting tracers for Prolog. We have compared it with an
instrumentation based on O'Keefe's "advice" utility 2 ,
made compatible with the ECLiPSe tracer. We traced
"standard" Prolog benchmark programs 3 with both tracing
techniques and measured the resulting CPU times. On average
the performances of both implementations are equivalent:
tracing Prolog programs by program instrumentation is no
slower than using a low-level tracer. To our knowledge,
this is the first time that a quantitative comparison of
both approaches is made. Another contribution is that our
source instrumentation is more complete than O'Keefe's
advice package. In particular, it deals with built-in
predicates, and allows predicates to be skipped/unskipped.

----------------------------------------

1 A. Tolmach and A.W. Appel. A debugger for Standard
ML. Journal of Functional Programming, 5(2):155-200,
April 1995.

2 The "advice" utility is part of the DEC10 Prolog
library, available by anonymous ftp from the AIAI of
the University of Edinburgh (aiai.edinburgh.ac.uk).

3 P. Van Roy and A. M.  Despain. High-performance
logic programming with the Aquarius Prolog
compiler. Computer, 25(1):54-68, January 1992. }
}


P. Fradet, R. Gaugne, D. Le Métayer
"Static detection of pointer errors: an axiomatisation and a checking algorithm", In Proc. European Symposium on Programming, ESOP'96, LNCS, vol. 1058, pp. 125-140, Springer-Verlag, Linköping, Sweden, April 1996.

Keywords: Debugging tool, alias analysis, Hoare's logic.
@InProceedings{esop96,
author       = {P. Fradet and R. Gaugne and D. Le M\'etayer},
title        = {Static detection of pointer errors: an axiomatisation and
a checking algorithm},
booktitle    = {Proc. European Symposium on Programming, ESOP'96},
year         = 1996,
url          = {ftp://ftp.irisa.fr/local/lande/rgpfdlm-esop96.ps.gz},
keywords     = {Debugging tool, alias analysis, Hoare's logic.},
abstract     = {The incorrect use of pointers is one of the most common
source of bugs. As a consequence, any kind of static code
checking capable of detecting potential bugs at compile
time is welcome. This paper presents a static analysis for
the detection of incorrect accesses to memory (dereferences
of invalid pointers). A pointer may be invalid because it
has not been initialised or because it refers to a memory
location which has been deallocated. The analyser is
derived from an axiomatisation of alias and connectivity
properties which is shown to be sound with respect to the
natural semantics of the language. It deals with
dynamically allocated data structures and it is accurate
enough to handle circular structures. },
volume       = 1058,
series       = {LNCS},
publisher    = {Springer-Verlag},
month        = {April},
pages        = {125-140}
}


P. Fradet, R. Gaugne, D. Le Métayer
"An inference algorithm for the static verification of pointer manipulation", IRISA, technical report, nº 980 , 1996.

Keywords: Debugging tool, alias analysis, Hoare's logic.
@TechReport{pi980,
author       = {P. Fradet and R. Gaugne and D. Le M\'etayer},
title        = {An inference algorithm for the static verification of
pointer manipulation},
institution  = {IRISA},
year         = 1996,
url          = {ftp://ftp.irisa.fr/local/lande/pfrgdlm-PI980.ps.gz},
keywords     = {Debugging tool, alias analysis, Hoare's logic.},
abstract     = {The incorrect use of pointers is one of the most common
source of bugs. As a consequence, any kind of static code
checking capable of detecting potential bugs at compile
time is welcome. This paper presents a static analysis for
the detection of incorrect accesses to memory (dereferences
of invalid pointers). A pointer may be invalid because it
has not been initialised or because it refers to a memory
location which has been deallocated. The analyser is
derived from an axiomatisation of alias and connectivity
properties which is shown to be sound with respect to the
natural semantics of the language. It deals with
dynamically allocated data structures and it is accurate
enough to handle circular structures.},
number       = 980
}


"Type checking for a multiset rewriting language", In Proc. of the LOMAPS workshop on Analysis and Verification of Multiple-agent Languages, LNCS, vol. 1192, pp. 126-140, 1996.

Keywords: multiset rewriting, graph grammars, type checking, invariant, verification.
@InProceedings{fm96,
author       = {P. Fradet and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/pfdlm-lomaps.ps.Z},
booktitle    = {Proc. of the LOMAPS workshop on Analysis and
Verification of Multiple-agent Languages},
pages        = {126-140},
title        = {Type checking for a multiset rewriting language},
series       = {LNCS},
volume       = {1192},
year         = {1996},
keywords     = {multiset rewriting, graph grammars, type checking,
invariant, verification},
abstract     = {We enhance Gamma, a multiset rewriting language, with a
notion of structured multiset. A structured multiset is a
set of addresses satisfying specific relations which can be
used in the rewriting rules of the program. A type is
defined by a context-free graph grammar and a structured
multiset belongs to a type T if its underlying set of
addresses satisfies the invariant expressed by the grammar
defining T. We define a type checking algorithm which
allows us to prove mechanically that a program maintains
its data structure invariant.}
}


"Structured Gamma", IRISA, technical report, nº 989, 1996.

Keywords: multiset rewriting, type checking, invariant, verification, refinement.
@TechReport{fm96b,
author       = {P. Fradet and D. Le M\'etayer},
institution  = {IRISA},
title        = {Structured Gamma},
number       = {989},
year         = {1996},
url          = {ftp://ftp.irisa.fr/local/lande/pfdlm-PI989.ps.gz},
keywords     = {multiset rewriting, type checking, invariant,
verification, refinement},
abstract     = {The Gamma language is based on the chemical reaction
metaphor which has a number of benefits with respect to
parallelism and program derivation. But the original
definition of Gamma does not provide any facility for data
structuring or for specifying particular control
strategies. We address this issue by introducing a notion
of structured multiset which is a set of addresses
satisfying specific relations and associated with a value.
The relations can be seen as a form of neighbourhood
between the molecules of the solution; they can be used in
the reaction condition of the program or transformed by the
action. A type is defined by a context-free graph grammar
and a structured multiset belongs to a type T if its
underlying set of addresses satisfies the invariant
expressed by the grammar defining T. We define a type
checking algorithm which allows us to prove mechanically
that a program maintains its data structure invariant. We
illustrate the significance of the approach for program
reasoning and program refinement.}
}


D. Le Métayer
"Software architecture styles as graph grammars", In In Proc of the ACM SIGSOFT Symposium of the foundations of Software Engineering, pp. 15-23, 1996.

Keywords: coordination, graph rewriting, software architecture, static verification.
@InProceedings{met96,
author       = {D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-acmfse96.ps.Z},
booktitle    = {Proc. of the ACM SIGSOFT Symposium on the Foundations of
Software Engineering},
pages        = {15-23},
title        = {Software architecture styles as graph grammars},
year         = {1996},
keywords     = {coordination, graph rewriting, software architecture,
static verification},
abstract     = {We present a formalism for the definition of software
architectures in terms of graphs. Nodes represent the
individual agents and edges define their interconnection.
Individual agents can communicate only along the links
specified by the architecture. The dynamic evolution of the
overall architecture is defined independently by a
coordinator. An architecture style is a class of
architectures characterised by a graph grammar. The rules
of the coordinator are statically checked to ensure that
they preserve the constraints imposed by the architecture
style}
}


D. Le Métayer, D. Schmidt
"Structural Operational Semantics as a basis for static program analysis", In ACM Computing Surveys, pp. 340-343, 1996.
@InProceedings{dlm9,
author       = {D. Le M\'etayer and D. Schmidt},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-cs96.ps.Z},
booktitle    = {ACM Computing Surveys},
title        = {Structural Operational Semantics as a basis for static
program analysis},
year         = {1996},
pages        = {340-343},
abstract     = {In this paper, we argue that structural operational
semantics (SOS) is an appropriate style for
the definition of static analyses because it is both
structural and intensional. Another significant quality of
the SOS framework is that it can accommodate various classes
of languages, and thus forms an ideal basis for the design
of a program analysis platform.}
}


P. Louvet, O. Ridoux
"Parametric Polymorphism for Typed Prolog and $\lambda$Prolog", In 8th Int. Symp. Programming Languages Implementation and Logic Programming, LNCS, vol. 1140, pp. 47-61, Aachen, Germany, 1996.

Keywords: Logic programming, typing, polymorphism, second-order lambda-calculus..
@InProceedings{louvet:parametric:plilp:96,
author       = {P. Louvet and O. Ridoux},
title        = {Parametric Polymorphism for {Typed Prolog} and
$\lambda${Prolog}},
booktitle    = {8th Int. Symp. Programming Languages Implementation and
Logic Programming},
series       = {LNCS},
volume       = 1140,
pages        = {47--61},
year         = 1996,
abstract     = {Typed Prolog and LambdaProlog are logic programming
languages with a strict typing discipline which is based on
simple types with variables. These variables are
interpreted as denoting generic polymorphism. Experiments
show that this discipline does not handle properly common
logic programming practices used in Prolog. For instance,
the usual transformation for computing the Clark completion
of a Prolog program does not work well with some typed
programs. We observe that the head-condition is at the
heart of these problems, and conclude that it should be
enforced. We propose a second-order scheme which is
compatible with usual practices. In this scheme, type
variables denote parametric polymorphism. It allows
quantifying types and terms, passing type and term
parameters to goals and terms, and to express type guards
for selecting goals. We give its syntax and deduction
rules, and propose a solution to keep the concrete notation
of programs close to the usual one.},
keywords     = {Logic programming, typing, polymorphism, second-order
lambda-calculus.},
url          = {ftp://ftp.irisa.fr/local/lande/plor-plilp96.ps.gz}
}


O. Ridoux
"Engineering Transformations of Attributed Grammars in $\lambda$Prolog", In Joint Int. Conf. and Symp. Logic Programming, pp. 244-258, M. Maher (Eds), MIT Press, 1996.

Keywords: Syntax-directed translation, grammar transformations, logic grammars, DCG, LambdaProlog..
@InProceedings{ridoux:engineering:jicslp:96,
author       = {O. Ridoux},
title        = {Engineering Transformations of Attributed Grammars in
$\lambda${Prolog}},
year         = 1996,
editor       = {M. Maher},
pages        = {244--258},
booktitle    = {Joint Int. Conf. and Symp. Logic Programming},
publisher    = {MIT Press},
abstract     = {An abstract representation for grammar rules that permits
an easy implementation of several attributed grammar
transformations is presented. It clearly separates the
actions that contribute to evaluating attribute values from
the circulation of these values, and it makes it easy to
combine the representations of several rules in order to
build the representation of new rules. This abstract form
applies well to such transforms as elimination of
left-recursion, elimination of empty derivation, unfolding
and factorization. Finally, the technique is applied to
DCGs and a LambdaProlog implementation of the abstract form
and of the transforms is described.},
keywords     = {Syntax-directed translation, grammar transformations,
logic grammars, DCG, LambdaProlog.},
url          = {ftp://ftp.irisa.fr/local/lande/or-jicslp96.ps.gz}
}


S. Schoenig, M. Ducassé
"Slicing pour programmes Prolog", In Actes des journées GDR programmation'96, Orléans, Université de Bordeaux I, Novembre 1996.
@InProceedings{sm96b,
author       = {S. Schoenig and M. Ducass{\'e}},
url          = {ftp://ftp.irisa.fr/local/lande/ssmd-gdr96.ps.Z},
title        = {Slicing pour programmes Prolog},
booktitle    = {Actes des journ{\'e}es GDR programmation'96, Orl\'{e}ans},
publisher    = {Universit{\'e} de Bordeaux I},
month        = {Novembre},
year         = {1996},
abstract     = {Slicing is a program analysis technique originally
developed by Weiser for imperative languages. Weiser
showed that slicing is a natural debugging tool, but it
also has numerous other applications (program
integration, optimisation, etc.). In this article, we
propose a definition of slicing for Prolog and an
algorithm. The algorithm is applicable at least to pure
Prolog extended with a few built-in predicates (=/2 and
arithmetic). To our knowledge, this algorithm is the
first to be proposed for Prolog. The specificities of
Prolog (indeterminism and lack of explicit control flow)
prevent a trivial adaptation of the existing algorithms
for imperative languages. }
}


S. Schoenig, M. Ducassé
"A Backward Slicing Algorithm for Prolog", In Static Analysis Symposium, pp. 317-331, R. Cousot, D. A. Schmidt (Eds), Springer-Verlag, LNCS 1145, Aachen, September 1996.
@InProceedings{sm96,
author       = {S. Schoenig and M. Ducass\'{e}},
url          = {ftp://ftp.irisa.fr/local/lande/ssmd-sas96.ps.gz},
title        = {A Backward Slicing Algorithm for Prolog},
booktitle    = {Static Analysis Symposium},
year         = {1996},
editor       = {R. Cousot and D.A. Schmidt},
pages        = {317-331},
publisher    = {Springer-Verlag, LNCS 1145},
month        = {September},
abstract     = {Slicing is a program analysis technique originally
developed by Weiser for imperative languages. Weiser showed
that slicing is a natural tool for debugging, but it has
other numerous applications (program integration, program
optimisation, etc.). In this paper, we propose a
slicing algorithm for Prolog which produces executable
slices. The proposed algorithm is applicable at least to
pure Prolog extended by some simple built-in predicates
that handle the explicit unification =/2 and arithmetic. To
our knowledge, this algorithm is the first one to be
proposed for Prolog. Because of the indeterminism and lack
of explicit control flow of Prolog, existing algorithms
cannot be trivially adapted. The two main contributions of
this paper are a general definition of slicing adapted to
Prolog and a slicing algorithm that produces executable
programs. }
}


M. Ducassé (Ed.)
"Proceedings of the 2nd International Workshop on Automated and Algorithmic Debugging", IRISA, Campus de Beaulieu, F-35042 Rennes cedex, May 1995.
@Proceedings{duc95b,
title        = {Proceedings of the 2nd International Workshop on Automated
and Algorithmic Debugging},
year         = {1995},
editor       = {M. Ducass\'{e}},
publisher    = {IRISA, Campus de Beaulieu, F-35042 Rennes cedex},
month        = {May}
}


J.-P. Banâtre, C. Bryce, D. Le Métayer
"An Approach to information security in distributed systems", In Proceedings of the 5th IEEE International Workshop on Future Trends in Distributed Computing Systems, pp. 384-394, 1995.
@InProceedings{bbm95,
author       = {J.-P. Ban\^atre and C. Bryce and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-ftdcs95.ps.Z},
booktitle    = {Proceedings of the 5th IEEE International Workshop on
Future Trends in Distributed Computing Systems},
pages        = {384-394},
title        = {An Approach to information security in distributed
systems},
year         = {1995},
abstract     = {Information flow control mechanisms detect and prevent
illegal transfers of information within a computer system.
In this paper, we give an overview of a programming
language based approach to information flow control in a
system of communicating processes. The language chosen to
present the approach is CSP. We give the security semantics
of CSP and show, with the aid of examples, how the
semantics can be used to conduct both manual and automated
security proofs of application programs}
}


C. Belleannée, P. Brisset, O. Ridoux
"Une reconstruction pragmatique de $\lambda$Prolog", Technique et science informatiques, vol. 14, pp. 1131-1164, 1995.

Keywords: Programmation logique, LambdaProlog, lambda-calcul, quantifications, types. Logic programming, LambdaProlog, lambda-calculus, quantifications, types.
@Article{belleannee:reconstruction:tsi:95,
author       = {C. Belleannée and P. Brisset and O. Ridoux},
title        = {Une reconstruction pragmatique de $\lambda${Prolog}},
journal      = {Technique et science informatiques},
year         = 1995,
volume       = 14,
numero       = 9,
pages        = {1131--1164},
abstract     = {LambdaProlog est un langage de programmation logique dont
les clauses et les termes généralisent ceux de Prolog. On
peut se demander si toutes ces extensions sont nécessaires
simultanément et si des langages intermédiaires
intéressants ne pourraient pas être définis, au moins dans
un but pédagogique. Nous répondons à cette question en
montrant que des liens de nécessité conduisent à adopter
toutes les extensions à partir de l'introduction du nouveau
domaine de termes. De cette reconstruction découle une
heuristique de programmation par induction sur les types
qui est un guide commode pour utiliser LambdaProlog.
LambdaProlog is a logic programming language in which
clauses and terms are more general than in Prolog. One may
wonder whether these extensions are simultaneously needed,
and what are the useful subsets of LambdaProlog, at least
for pedagogical purposes. We answer this question by
showing that necessity links lead from the introduction
of the new term domain to the extension of the formula
language. This reconstruction yields a handy heuristic
for programming by induction on types, which is a
convenient guide for using LambdaProlog.},
keywords     = {Programmation logique, LambdaProlog, lambda-calcul,
quantifications, types. logic programming, LambdaProlog,
lambda-calculus, quantifications, types.},
url          = {ftp://ftp.irisa.fr/local/lande/cbpbor-tsi95.ps.Z}
}


S. Coupet-Grimal, O. Ridoux
"On the use of Advanced Logic Programming Languages in Computational Linguistics", J. Logic Programming, vol. 24, nº 1&2, pp. 121-159, 1995.

Keywords: Logic programming, computational linguistics, LambdaProlog, Prolog II, lambda-terms, rational terms.
@Article{coupet:use:jlp:95,
author       = {S. {Coupet-Grimal} and O. Ridoux},
title        = {On the use of Advanced Logic Programming Languages in
Computational Linguistics},
journal      = {J.~Logic Programming},
volume       = 24,
number       = {1\&2},
pages        = {121--159},
year         = 1995,
abstract     = {Computational Linguistics and Logic Programming have
strong connections, but the former uses concepts that are
absent from the most familiar implementations of the
latter. We advocate that a Logic Programming language need
not feature the Computational Linguistics concepts exactly,
it must only provide a logical way of dealing with them. We
focus on the manipulation of higher-order terms and the
logical handling of context, and we show that the advanced
features of Prolog~II and LambdaProlog are useful for
dealing with these concepts. Higher-order terms are native
in LambdaProlog, and Prolog~II's infinite trees provide a
handy data-structure for manipulating them. The formula
language of LambdaProlog can be transposed in the Logic
Grammar realm to allow for a logical handling of
context.},
keywords     = {Logic programming, computational linguistics,
LambdaProlog, Prolog~II, lambda-terms, rational terms.},
url          = {ftp://ftp.irisa.fr/local/lande/scor-jlp95.ps.Z}
}


R. Douence, P. Fradet
"Towards a Taxonomy of Functional Language Implementations", In Proc. of 7th Int. Symp. on Programming Languages: Implementations, Logics and Programs, LNCS, vol. 982, pp. 34-45, Springer-Verlag, Utrecht, The Netherlands, 1995.

Keywords: Compilation, optimizations, program transformation, lambda-calculus, combinators.
@InProceedings{plilp95,
author       = {R. Douence and P. Fradet},
title        = {Towards a Taxonomy of Functional Language
Implementations},
booktitle    = {Proc. of 7th Int. Symp. on Programming Languages:
Implementations, Logics and Programs},
year         = 1995,
url          = {ftp://ftp.irisa.fr/local/lande/taxo015.ps.Z},
keywords     = {Compilation, optimizations, program transformation,
lambda-calculus, combinators.},
abstract     = {We express implementations of functional languages as a
succession of program transformations in a common
framework. At each step, different transformations model
fundamental choices or optimizations. A benefit of this
approach is to structure and decompose the implementation
process. The correctness proofs can be tackled
independently for each step and amount to proving program
transformations in the functional world. It also paves the
way to formal comparisons by estimating the complexity of
individual transformations or compositions of them. We
focus on call-by-value implementations, describe and
compare the diverse alternatives and classify well-known
abstract machines. This work also aims to open the design
space of functional language implementations and we suggest
how distinct choices could be mixed to yield efficient
hybrid abstract machines.},
volume       = 982,
series       = {LNCS},
publisher    = {Springer-Verlag},
pages        = {34-45}
}


M. Ducassé
"Automated Debugging Extensions of the Opium Trace Analyser", In Proceedings of the 2nd International Workshop on Automated and Algorithmic Debugging, IRISA, Campus de Beaulieu, F-35042 Rennes cedex, Saint Malo, France, May 1995.
@InProceedings{duc95,
author       = {M. Ducass\'e},
title        = {Automated Debugging Extensions of the {Opium} Trace
Analyser},
booktitle    = {Proceedings of the 2nd International Workshop on Automated
and Algorithmic Debugging},
year         = {1995},
publisher    = {IRISA, Campus de Beaulieu, F-35042 Rennes cedex},
month        = {May},
abstract     = {Traces of program executions tell how programs behave in
given cases. They are a helpful source of information for
automated debugging. Opium is an automated trace analyser
for Prolog programs. It is programmable and extendable. It
provides a trace query language and abstract views of
executions as a basis for automated debugging. Opium has
shown its capabilities to build abstract tracers and
automated debugging facilities. This paper lists the
extensions written so far, and describes two recent
extensions: the abstract tracers for the LO (Linear
Objects) language and for the CHR (Constraint Handling
Rules) language. }
}


V. Gouranton
"Une analyse de globalisation basée sur une sémantique naturelle", Journées du GDR Programmation, technical report, November 1995, Grenoble.

Keywords: functional languages, operational semantics, globalisation analysis, compiler optimisation.
@TechReport{gouranton:95:uneanalyse,
author       = {V. Gouranton},
institution  = {Journ{\'e}es du GDR Programmation},
month        = {November},
title        = {Une analyse de globalisation bas\'ee sur une s\'emantique
naturelle},
year         = {1995},
url          = {ftp://ftp.irisa.fr/local/lande/vg_gdr95.ps.Z},
keywords     = {functional languages, operational semantics,
globalisation analysis, compiler optimisation},
abstract     = {We define a globalisation analysis for a strict
functional language by successive refinements of the
natural semantics of the language. We obtain an
instrumented trace semantics, then a generic inference
system that is proved correct with respect to this
semantics. The inference system must then be specialised
for the property under consideration, defined by
recurrence over the traces.}
}


V. Gouranton, D. Le Métayer
"Derivation of static analysers of functional programs from path properties of a natural semantics", INRIA, technical report, nº 2607, 1995.

Keywords: functional languages, operational semantics, neededness analysis, paths analysis, optimising compilers.
@TechReport{vm95,
author       = {V. Gouranton and D. Le M\'etayer},
institution  = {INRIA},
title        = {Derivation of static analysers of functional programs from
path properties of a natural semantics},
number       = {2607},
year         = {1995},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-rr2607.ps.Z},
keywords     = {functional languages, operational semantics, neededness
analysis, paths analysis, optimising compilers},
abstract     = {We advocate the use of operational semantics as a basis
for specifying program analyses for functional languages.
We put forward a methodology for defining a static analysis
by successive refinements of the natural semantics of the
language. We use paths as the abstract representation of
proof trees and we provide a language for defining
properties in terms of paths (neededness, absence,
uniqueness,...) and the mechanical derivation of the
corresponding analyses}
}


C. L. Hankin, D. Le Métayer
"Lazy types and program analysis", Science of Computer Programming, 1995.
@Article{hm95,
author       = {C.L. Hankin and D. Le M\'etayer},
journal      = {Science of Computer Programming},
title        = {Lazy types and program analysis},
year         = {1995},
abstract     = {Approaches to static analysis based on non-standard type
systems have received considerable interest recently. Most
work has concentrated on the relationship between such
analyses and abstract interpretation. In this paper, we
focus on the problem of producing efficient algorithms from
such type-based analyses. The key idea is the notion of
lazy types. We present the basic notions in the context of
a higher-order strictness analysis of list-processing
functions. We also sketch some recent work concerning a
general framework for program analysis based on these
ideas. We conclude with some experimental results}
}


D. Le Métayer
"Proving properties of programs defined over recursive data structures", In ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation, La Jolla, California, pp. 88-99, ACM PRESS, 1995.
@InProceedings{met95,
author       = {D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-pepm95.ps.Z},
booktitle    = {ACM Symposium on Partial Evaluation and Semantics-Based
Program Manipulation, La Jolla, California},
pages        = {88-99},
title        = {Proving properties of programs defined over recursive data
structures},
publisher    = {ACM PRESS},
year         = {1995},
abstract     = {We present a technique for the mechanical proof of
correctness properties of programs. We define a language of
properties over recursive data structures and an inference
system to assign properties to programs. In order to be
able to design a tractable inference algorithm, we impose
restrictions on both the programming language and the
language of properties. We show that these restrictions do
not preclude the treatment of interesting programs and
properties. As an example, our analyser is able to prove
automatically that a sort program returns a list of
non-increasing values}
}


O. Ridoux
"Imagining CLP$(\Lambda,\equiv_{\alpha\beta})$", In Constraint Programming: Basics and Trends. Selected papers of the 22nd Spring School in Theoretical Computer Science. LNCS 910, pp. 209-230, A. Podelski (Ed.), Châtillon/Seine, France, 1995.

Keywords: CLP, LambdaProlog, lambda-calculus.
@InProceedings{ridoux:imagining:sctcs:95,
author       = {O. Ridoux},
title        = {Imagining {CLP}$(\Lambda,\equiv_{\alpha\beta})$},
editor       = {A. Podelski},
year         = 1995,
pages        = {209--230},
booktitle    = {Constraint Programming: Basics and Trends. Selected papers
of the 22nd Spring School in Theoretical Computer Science.
LNCS 910},
abstract     = {We study under which conditions the domain of
lambda-terms and the equality theory of the lambda-calculus
form the basis of a usable constraint logic programming
language (CLP). The conditions are that the equality theory
must contain axiom {$\eta$}, and the formula language must
depart from Horn clauses and accept universal
quantifications and implications in goals. In short,
CLP-lambda must be close to LambdaProlog.},
keywords     = {CLP, LambdaProlog, lambda-calculus.},
url          = {ftp://ftp.irisa.fr/local/lande/or-spring95.ps.Z}
}


J.-P. Banâtre, C. Bryce, D. Le Métayer
"Compile-time detection of information flow in sequential programs", In Proc. European Symposium on Research in Computer Security, LNCS, vol. 875, Springer Verlag, 1994.

Keywords: formal verification, program analysis, verification tools, computer security, information flow.
@InProceedings{bbm94,
author       = {J.-P. Ban\^atre and C. Bryce and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-esorics94.ps.Z},
booktitle    = {Proc. European Symposium on Research in Computer
Security},
publisher    = {Springer Verlag},
series       = {LNCS},
volume       = 875,
title        = {Compile-time detection of information flow in sequential
programs},
year         = {1994},
keywords     = {formal verification, program analysis, verification tools,
computer security, information flow},
abstract     = {We give a formal definition of the notion of information
flow for a simple guarded command language. We propose an
axiomatisation of security properties based on this notion
of information flow and we prove its soundness with respect
to the operational semantics of the language. We then
identify the sources of non determinism in proofs and we
derive in successive steps an inference algorithm which is
both sound and complete with respect to the inference
system}
}


P. Brisset, O. Ridoux
"The Architecture of an Implementation of $\lambda$Prolog: Prolog/Mali", In ILPS'94 Workshop on Implementation Techniques for Logic Programming Languages, 1994, Extended version of [bo92b].

Keywords: See [bo92b].
@InProceedings{brisset:architecture:ilpsw:94,
author       = {P. Brisset and O. Ridoux},
title        = {The Architecture of an Implementation of
$\lambda${Prolog}: {Prolog/Mali}},
booktitle    = {ILPS'94 Workshop on Implementation Techniques for Logic
Programming Languages},
year         = 1994,
note         = {Extended version of \cite{bo92b}},
abstract     = {See \cite{bo92b}},
keywords     = {See \cite{bo92b}},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-w-ilps94.ps.Z}
}


M. Ducassé, J. Noyé
"Logic Programming Environments: Dynamic program analysis and debugging", The Journal of Logic Programming, vol. 19/20, pp. 351-384, May/July 1994.
@Article{dn94,
author       = {M. Ducass\'{e} and J. Noy\'{e}},
url          = {http://www.irisa.fr/EXTERNE/bibli/pi/pi910.html},
title        = {Logic Programming Environments: Dynamic program analysis
and debugging},
journal      = {The Journal of Logic Programming},
year         = {1994},
volume       = {19/20},
pages        = {351-384},
month        = {May/July},
note         = {Anniversary issue: Ten years of Logic Programming},
abstract     = {Programming environments are essential for the acceptance
of programming languages. This survey emphasizes that
program analysis, both static and dynamic, is the central
issue of programming environments. Because their clean
semantics makes powerful analysis possible, logic
programming languages have an indisputable asset in the
long term. This survey is focused on logic program analysis
and debugging. The large number of references provided shows
that the field, though maybe scattered, is active. A
unifying framework is given which separates environment
tools into extraction, analysis, and visualization. It
facilitates the analysis of existing tools and should give
some guidelines to develop new ones. Achievements in logic
programming are listed; some techniques developed for other
languages are pointed out, and some trends for further
research are drawn. Among the main achievements are
algorithmic debugging, tracing for sequential Prolog, and
abstract interpretation. The main missing techniques are
slicing, test case generation, and program mutation. The
perspectives we see are integration, evaluation and, above
all, automated static and dynamic analysis. }
}


P. Fradet
"Collecting More Garbage", In Proc. of ACM Conf. on Lisp and Functional Programming, pp. 24-33, ACM Press, Orlando, FL, USA, June 1994.

Keywords: Garbage collection, space leaks, typing, parametricity..
@InProceedings{lisp94,
author       = {P. Fradet},
title        = {Collecting More Garbage},
booktitle    = {Proc. of ACM Conf. on Lisp and Functional Programming},
year         = 1994,
url          = {ftp://ftp.irisa.fr/local/lande/pf-lisp94.ps.Z},
keywords     = {Garbage collection, space leaks, typing, parametricity.},
abstract     = {We present a method, adapted to polymorphically typed
functional languages, to detect and collect more garbage
than existing GCs. It can be applied to strict or lazy
higher order languages and to several garbage collection
schemes. Our GC exploits the information on utility of
arguments provided by polymorphic types of functions. It is
able to detect garbage that is still referenced from the
stack and may collect useless parts of otherwise useful
data structures. We show how to partially collect shared
data structures and to extend the type system to infer more
precise information. We also present how this technique can
plug several common forms of space leaks.},
publisher    = {ACM Press},
month        = {June},
pages        = {24-33}
}


P. Fradet
"Compilation of Head and Strong Reduction", In Proc. of the 5th European Symposium on Programming, LNCS, vol. 788, pp. 211-224, Springer-Verlag, Edinburgh, UK, April 1994.

Keywords: lambda-calculus, continuations, strong reduction, head reduction, compilation.
@InProceedings{esop94,
author       = {P. Fradet},
title        = {Compilation of Head and Strong Reduction},
booktitle    = {Proc. of the 5th European Symposium on Programming},
year         = 1994,
url          = {ftp://ftp.irisa.fr/local/lande/pf-esop94.ps.Z},
keywords     = {lambda-calculus, continuations, strong reduction, head
reduction, compilation.},
abstract     = {Functional language compilers implement only weak-head
reduction. However, there are cases where head normal forms
or full normal forms are needed. Here, we study how to use
cps conversion for the compilation of head and strong
reductions. We apply cps expressions to a special
continuation so that their head or strong normal form can
be obtained by the usual weak-head reduction. We remain
within the functional framework and no special abstract
machine is needed. Used as a preliminary step our method
allows a standard compiler to evaluate under lambda's.},
volume       = 788,
series       = {LNCS},
publisher    = {Springer-Verlag},
month        = {April},
pages        = {211-224}
}


C. L. Hankin, D. Le Métayer
"Deriving algorithms from type inference systems: Application to strictness analysis", In Proc. ACM Symposium on Principles of Programming Languages, pp. 202-212, 1994.
@InProceedings{hm94,
author       = {C.L. Hankin and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-popl94.ps.Z},
booktitle    = {Proc. ACM Symposium on Principles of Programming
Languages},
pages        = {202-212},
title        = {Deriving algorithms from type inference systems:
Application to strictness analysis},
year         = {1994},
abstract     = {The role of non-standard type inference in static program
analysis has been much studied recently. Early work
emphasised the efficiency of type inference algorithms and
paid little attention to the correctness of the inference
system. Recently more powerful inference systems have been
investigated but the connection with efficient inference
algorithms has been obscured. The contribution of this
paper is twofold: first we show how to transform a program
logic into an algorithm and second, we introduce the notion
of lazy types and show how to derive an efficient algorithm
for strictness analysis}
}


C. L. Hankin, D. Le Métayer
"Lazy type inference for the strictness analysis of lists", In Proc. 5th European Symposium on Programming, Springer Verlag, LNCS, vol. 788, pp. 211-224, 1994.
@InProceedings{hm94b,
author       = {C.L. Hankin and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-esop94.ps.Z},
booktitle    = {Proc. 5th European Symposium on Programming, Springer
Verlag},
volume       = {788},
pages        = {211-224},
title        = {Lazy type inference for the strictness analysis of
lists},
series       = {LNCS},
year         = {1994},
abstract     = {We present a type inference system for the strictness
analysis of lists and we show that it can be used as the
basis for an efficient algorithm. The algorithm is as
accurate as the usual abstract interpretation technique.
One distinctive advantage of this approach is that it is
not necessary to impose an abstract domain of a particular
depth prior to the analysis: the lazy type algorithm will
instead explore the part of a potentially infinite domain
required to prove the strictness property}
}


C. L. Hankin, D. Le Métayer
"A type-based framework for program analysis", In Proc. Static Analysis Symposium, Springer Verlag, LNCS, vol. 864, pp. 380-394, 1994.
@InProceedings{hm94c,
author       = {C.L. Hankin and D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-sas94.ps.Z},
booktitle    = {Proc. Static Analysis Symposium, Springer Verlag},
volume       = {864},
pages        = {380-394},
title        = {A type-based framework for program analysis},
series       = {LNCS},
year         = {1994},
abstract     = {In this paper, we present a general framework for
type-based analyses of functional programs. Our framework
is parameterised by a set of types to represent properties
and interpretations for constants in the language. To
construct a new analysis the user needs only to supply a
model for the types (which properties they denote) and
sound rules for the constants. We identify the local
properties that must be proven to guarantee the correctness
of a specific analysis and algorithm. We illustrate the
approach by recasting Hunt and Sand's binding time analysis
in our framework. Furthermore, we report on experimental
results suggesting that our generic inference algorithm can
provide the basis for an efficient program analyser}
}


D. Le Métayer
"Higher-order multiset programming", In Proc. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 18, American Mathematical Society, 1994.
@InProceedings{met94,
author       = {D. Le M\'etayer},
url          = {ftp://ftp.irisa.fr/local/lande/dlm-dimacs94.ps.Z},
booktitle    = {Proc. DIMACS Series in Discrete Mathematics and
Theoretical Computer Science},
title        = {Higher-order multiset programming},
volume       = {18},
publisher    = {American Mathematical Society},
year         = {1994},
abstract     = {Gamma is a kernel language in which programs are described
in terms of multiset transformations following the chemical
reaction metaphor. Previous work has demonstrated its
significance for the construction of massively parallel
programs but also highlighted some limitations. We
introduce a higher-order extension of this formalism
unifying the program and data syntactic categories. As a
consequence, active programs can be inserted into
multisets. This generalisation has far reaching effects and
we show its relevance from both the formal and the
practical point of view. In particular, we present a
program defining the chemical abstract machine in
higher-order Gamma. We show that more sophisticated control
strategies can be expressed within the language and we put
them into practice by deriving a scan vector model program
suitable for execution on fine-grained parallel machines}
}


P. Brisset, O. Ridoux
"Continuations in $\lambda$Prolog", In 10th Int. Conf. Logic Programming, pp. 27-43, D. S. Warren (Ed.), MIT Press, 1993.

Keywords: LambdaProlog, compilation, continuation, exception handling.
@InProceedings{brisset:continuations:iclp:93,
author       = {P. Brisset and O. Ridoux},
title        = {Continuations in $\lambda${Prolog}},
booktitle    = {10th Int. Conf. Logic Programming},
comment      = {Budapest, Hungary},
editor       = {D.S. Warren},
pages        = {27--43},
publisher    = {MIT Press},
year         = 1993,
abstract     = {Continuations are well known in functional programming
where they have been used to transform and compile
programs. Some languages provide explicit manipulations of
the continuation for the user: The user can catch and
modify the current continuation. Continuations have also
been used in the logic programming context to give a
denotational semantics for Prolog, to generate Prolog
compilers and to transform Prolog programs. In this paper,
we propose to introduce new built-ins in a logic
programming language to enable the user to explicitly
replace the continuations. These built-ins allow the user
to have a new control of the execution. We choose
LambdaProlog because of its higher-order syntax and
implications in the goals which are necessary for the
definition and use of these built-ins. In order to define
the built-ins, we extend to LambdaProlog the Prolog
semantics based on continuations. Then, we show that an
exception mechanism can be easily implemented using these
new built-ins. The proposed semantics is also used to prove
equivalence of goals changing the continuations.},
keywords     = {LambdaProlog, compilation, continuation, exception
handling.},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-iclp93.ps.Z}
}


P. Brisset, O. Ridoux
"The Compilation of $\lambda$Prolog and its execution with MALI", INRIA, technical report, nº 1831, 1993, Rapport de recherche.

Keywords: See [bo92a].
@TechReport{brisset:compilation:inria:93,
author       = {P. Brisset and O. Ridoux},
title        = {The Compilation of $\lambda${Prolog} and its execution
with {MALI}},
institution  = {INRIA},
type         = {Rapport de recherche},
number       = {1831},
year         = 1993,
abstract     = {See \cite{bo92a}},
keywords     = {See \cite{bo92a}},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-tr-irisa687-92.ps.Z}
}


M. Ducassé
"A pragmatic survey of automated debugging", In Proceedings of the First Workshop on Automated and Algorithmic Debugging, Lecture Notes in Computer Science, vol. 749, P. Fritzson (Ed.), Springer-Verlag, Linkoeping, Sweden, May 1993.
@InProceedings{duc93b,
author       = {M. Ducass\'{e}},
title        = {A pragmatic survey of automated debugging},
booktitle    = {Proceedings of the First Workshop on Automated and
Algorithmic Debugging},
editor       = {P. Fritzson},
year         = {1993},
month        = {May},
publisher    = {Springer-Verlag},
series       = {Lecture Notes in Computer Science},
volume       = {749},
abstract     = {This article proposes a structuring view of the area of
automated debugging. Nineteen automated debugging systems
are analyzed. Thirteen existing automated debugging
techniques are briefly evaluated from a pragmatic point of
view. The three underlying strategies are identified,
namely verification with respect to specification, checking
with respect to language knowledge and filtering with
respect to symptom. The verification strategy compares the
actual program with some formal specification of the
intended program. The checking strategy looks for suspect
places which do not comply with some explicit knowledge of
the programming language. The filtering strategy assumes
correct parts of the code which cannot be responsible for
the error symptom. Assertion evaluation and algorithmic
debugging are the most promising verification techniques.
Some intrinsic limitations of the checking strategy make
it only a complementary, though helpful, debugging support.
The slicing technique should be included in any debugger.
}
}


S. Le Huitouze, P. Louvet, O. Ridoux
"Logic Grammars and $\lambda$Prolog", In 10th Int. Conf. Logic Programming, pp. 64-79, D. S. Warren (Ed.), MIT Press, 1993.

Keywords: LambdaProlog, logic grammars, scope, context handling in syntactic analysis..
@InProceedings{llr93,
author       = {S. {Le Huitouze} and P. Louvet and O. Ridoux},
title        = {Logic Grammars and $\lambda${Prolog}},
publisher    = {MIT Press},
booktitle    = {10th Int. Conf. Logic Programming},
comment      = {Budapest, Hungary},
editor       = {D.S. Warren},
pages        = {64--79},
year         = 1993,
abstract     = {A logic grammar formalism called DCG (Definite Clause
Grammars), which has proved to be useful, is part of most
Prolog implementations. We develop two new logic grammar
formalisms called DCG' and lambda-HHG (higher-order
Hereditary Harrop Grammars) that can be used in
LambdaProlog implementations. The relations between DCG,
DCG', and lambda-HHG, and Prolog and LambdaProlog can be
summarized as follows: (1) The language Prolog, the DCG
formalism, and the translation of DCG into Prolog by Prolog
are classical. (2) The evolution from Prolog to
LambdaProlog is due to Miller and the advantage of using
LambdaProlog for doing natural language analysis is shown
by Pereira, and Pareschi and Miller. (3) We propose a
strongly typed variant of DCG (called DCG') for its
translation into LambdaProlog by LambdaProlog. It is a
first stage towards a more elaborate formalism. (4) A
formalism that is to DCG what LambdaProlog is to Prolog is
still missing, and also the way to translate it into
LambdaProlog. Such a formalism combines the advantage of
being grammatical and hiding the house-keeping operations
(like DCG) and of having higher-order terms as attributes
and providing a logical approach to context (like
LambdaProlog). lambda-HHG is such a formalism.},
keywords     = {LambdaProlog, logic grammars, scope, context handling in
syntactic analysis.},
url          = {ftp://ftp.irisa.fr/local/lande/slplor-iclp93.ps.Z}
}


S. Le Huitouze, P. Louvet, O. Ridoux
"Les grammaires logiques et $\lambda$Prolog", In Journées Francophones sur la Programmation en Logique, pp. 93-108, Teknea, Nîmes, France, 1993. French version of [llr93].

Keywords: LambdaProlog, grammaires logiques, portée, représentation du contexte.
@InProceedings{lehuitouze:grammaires:jfpl:93,
author       = {S. {Le Huitouze} and P. Louvet and O. Ridoux},
title        = {Les grammaires logiques et $\lambda${Prolog}},
booktitle    = {Journ\'{e}es Francophones sur la Programmation en
Logique},
year         = 1993,
publisher    = {Teknea},
pages        = {93--108},
note         = {Version fran\c{c}aise de~\cite{llr93}},
abstract     = {La plupart des systèmes Prolog proposent un formalisme de
grammaire logique appelé DCG (Definite Clause Grammar),
dont l'utilité est reconnue. Nous présentons deux nouveaux
formalismes de grammaire logique appelés DCG' et lambda-HHG
(higher-order Hereditary Harrop Grammar --- grammaires
héréditaires de Harrop d'ordre supérieur) destinés à être
utilisés dans les systèmes LambdaProlog. Les relations
entre DCG, DCG', et lambda-HHG, d'une part, et entre Prolog
et LambdaProlog, d'autre part, peuvent être résumées de la
manière suivante. (1) Prolog, DCG et la traduction de DCG
en Prolog sont classiques. (2) Miller propose l'évolution
de Prolog à LambdaProlog, et Pereira, Pareschi et Miller
montrent l'intérêt d'utiliser LambdaProlog pour le
traitement de la langue naturelle. (3) Nous proposons une
variante fortement typée de DCG (appelée DCG') afin de pouvoir
la traduire en LambdaProlog dans le système LambdaProlog.
C'est un premier pas vers un formalisme plus élaboré. (4)
lambda-HHG est à DCG ce que LambdaProlog est à Prolog. Ce
formalisme combine les avantages d'être grammatical et de
cacher les opérations d'un analyseur (comme DCG), et d'avoir
des termes d'ordre supérieur comme attributs et de proposer
une approche logique à la représentation des contextes
(comme LambdaProlog).},
keywords     = {LambdaProlog, grammaires logiques, portée, représentation
du contexte.},
url          = {ftp://ftp.irisa.fr/local/lande/slplor-jfpl93.ps.Z}
}


M. Ducassé, Y.-J. Lin, L. Ü. Yalcinalp (Eds)
"Proceedings of IJCSLP'92 Workshop on Logic Programming Environments", November 1992, Technical Report TR 92-143, Case Western Reserve University, Cleveland.
@Proceedings{dly92,
title        = {Proceedings of IJCSLP'92 Workshop on Logic Programming
Environments},
year         = {1992},
editor       = {M. Ducass\'{e} and Y.-J. Lin and L.\"{U}. Yalcinalp},
month        = {November},
note         = {Technical Report TR 92-143, Case Western Reserve
University, Cleveland}
}


Y. Bekkers, O. Ridoux, L. Ungaro
"Dynamic Memory Management for Sequential Logic Programming Languages", In Int. Workshop on Memory Management, LNCS, vol. 637, pp. 82-102, Y. Bekkers, J. Cohen (Eds), Springer-Verlag, 1992.

Keywords: Memory management, logic programming, garbage collection, usefulness logic.
@InProceedings{bekkers:dynamic:iwmm:92,
author       = {Y. Bekkers and O. Ridoux and L. Ungaro},
title        = {Dynamic Memory Management for Sequential Logic Programming
Languages},
booktitle    = {Int. Workshop on Memory Management},
series       = {LNCS},
volume       = 637,
comment      = {Saint-Malo, France},
editor       = {Y. Bekkers and J.~Cohen},
publisher    = {Springer-Verlag},
pages        = {82--102},
year         = 1992,
abstract     = {Logic programming languages are becoming more complex
with the introduction of new features such as constraints
or terms with an equality theory. With this increase in
complexity, they require more and more sophisticated memory
management. This survey gives an insight into the memory
management problems in sequential logic programming
language implementations; it also describes the presently
known solutions. It is meant to be understood by
non-specialists in logic programming with good knowledge of
memory management in general. We first describe a
"usefulness logic" for run-time objects. Usefulness logic
defines non-garbage objects. Next, memory management
systems are presented from the most trivial original
run-time system, with no real concern for memory problems,
to elaborated run-time systems with memory management
closely observing the usefulness logic. Finally, the choice
of a garbage collection technique is discussed in relation
to logic programming specificities.},
keywords     = {Memory management, logic programming, garbage collection,
usefulness logic.},
url          = {ftp://ftp.irisa.fr/local/lande/yborlu-iwmm92.ps.Z}
}


P. Brisset, O. Ridoux
"The Compilation of $\lambda$Prolog and its execution with MALI", IRISA, technical report, nº 687, 1992, Publication Interne.

Keywords: LambdaProlog, implementation, compilation, memory management.
@TechReport{bo92a,
author       = {P. Brisset and O. Ridoux},
title        = {The Compilation of $\lambda${Prolog} and its execution
with {MALI}},
institution  = {IRISA},
type         = {Publication Interne},
number       = {687},
year         = 1992,
abstract     = {We present a compiled implementation of LambdaProlog that
uses the abstract memory MALI for representing the
execution state. LambdaProlog is a logic programming
language allowing a more general clause form than Standard
Prolog's (namely hereditary Harrop formulas instead of Horn
formulas) and using simply typed lambda-terms as a term
domain instead of first order terms. The augmented clause
form causes the program (a set of clauses) and the
signature (a set of constants) to be changeable in a very
disciplined way. The new term domain has a semi-decidable
and infinitary unification theory, and it introduces the
need for a beta-reduction operation at run-time. MALI is an
abstract memory that is suitable for storing the
search-state of depth-first search processes. Its main
feature is its efficient memory management. We have used an
original LambdaProlog-to-C translation along which
predicates are transformed into functions operating on
continuations for handling failure and success in
unifications, and changes in signatures and programs. Two
keywords of this implementation are ``sharing'' and
``folding'' of representations. Sharing amounts to
recognising that some representation already exists and
reusing it. Folding amounts to recognising that two different
representations represent the same thing and replacing one
by the other.},
keywords     = {LambdaProlog, implementation, compilation, memory
management.},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-tr-irisa687-92.ps.Z}
}


P. Brisset, O. Ridoux
"The Architecture of an Implementation of $\lambda$Prolog: Prolog/Mali", In Workshop on $\lambda$Prolog, Philadelphia, 1992.

Keywords: LambdaProlog, implementation, compilation, memory management.
@InProceedings{bo92b,
author       = {P. Brisset and O. Ridoux},
title        = {The Architecture of an Implementation of
$\lambda${Prolog}: {Prolog/Mali}},
booktitle    = {Workshop on $\lambda$Prolog},
year         = 1992,
abstract     = {LambdaProlog is a logic programming language accepting a
more general clause form than standard Prolog (namely
hereditary Harrop formulas instead of Horn formulas) and
using simply typed lambda-terms as a term domain instead of
first order terms. Despite these extensions, it is still
amenable to goal-directed proofs and can still be given
procedural semantics. However, the execution of
LambdaProlog programs requires several departures from the
standard resolution scheme. First, the augmented clause
form causes the program (a set of clauses) and the
signature (a set of constants) to be changeable, but in a
very disciplined way. Second, the new term domain has a
semi-decidable and infinitary unification theory, and it
introduces the need for a beta-reduction operation at
run-time. MALI is an abstract memory that is suitable for
storing the search-state of depth-first search processes.
Its main feature is its efficient memory management. We
have used an original LambdaProlog-to-C translation:
predicates are transformed into functions operating on
several continuations. The compilation scheme is sometimes
an adaptation of the standard Prolog scheme, but at other
times it has to handle new features such as types,
beta-reduction and delayed unification. Two keywords of
this implementation are "sharing" and "folding" of
representations. Sharing amounts to recognising that some
representation already exists and reusing it. Folding
amounts to recognising that two different representations
represent the same thing and replacing one by the other. We
assume a basic knowledge of Prolog and LambdaProlog.},
keywords     = {LambdaProlog, implementation, compilation, memory
management.},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-wlp92.ps.Z}
}


M. Ducassé
"A general trace query mechanism based on Prolog", In International Symposium on Programming Language Implementation and Logic Programming, Lecture Notes in Computer Science, vol. 631, pp. 400-414, M. Bruynooghe, M. Wirsing (Eds), Springer-Verlag, August 1992.
@InProceedings{duc92f,
author       = {M. Ducass\'{e}},
url          = {ftp://ftp.irisa.fr/local/lande/md-plilp92.ps.gz},
title        = {A general trace query mechanism based on {Prolog}},
booktitle    = {International Symposium on Programming Language
Implementation and Logic Programming},
year         = {1992},
editor       = {M. Bruynooghe and M. Wirsing},
publisher    = {Springer-Verlag},
series       = {Lecture Notes in Computer Science},
volume       = {631},
month        = {August},
pages        = {400--414},
abstract     = {We present a general trace query language which is a
solution to the ever growing command sets of other tracers.
It provides all the required generality while being very
simple and efficient. We model a program execution into a
trace which is a stream of events. Execution events have a
uniform representation, and can be analysed by Prolog
programs. With this approach and thanks to the expressive
power of Prolog, two high-level primitives plus Prolog are
enough to provide a general trace query language. With a
few optimizations this language can work on large
executions without any loss of performance, if compared to
traditional tracers. This paper describes the trace query
mechanism from its high level specification down to some
implementation details. The proposed model of trace query
depends only on the sequentiality of the execution, and the
principles behind the design of the optimizations do not
depend on the traced language. }
}


M. Ducassé
"Opium: An advanced debugging system", In Proceedings of the Second Logic Programming Summer School, G. Comyn, N. Fuchs (Eds), Springer-Verlag, LNAI 636, September 1992, Esprit Network of Excellence in Computational Logic COMPULOG-NET.
@InProceedings{duc92d,
author       = {M. Ducass\'{e}},
url          = {ftp://ftp.irisa.fr/local/lande/md-lpss92.ps.gz},
title        = {Opium: An advanced debugging system},
organization = {Esprit Network of Excellence in Computational Logic
{COMPULOG-NET}},
booktitle    = {Proceedings of the Second Logic Programming Summer
School},
year         = {1992},
editor       = {G. Comyn and N. Fuchs},
publisher    = {Springer-Verlag, LNAI 636},
month        = {September},
abstract     = {The data used by program analysis in general is often
restricted to the source code of the analysed programs.
However, there is a complementary source of information,
namely traces of program executions. Usual tracers, which
extract this trace information, do not allow for general
trace analysis. Opium, our debugger for Prolog, sets up a
framework where program sources and traces of program
executions can be jointly analysed. As the debugging
process is heuristic and not all the debugging strategies
have been identified so far, Opium is programmable. In
particular, its trace query language gives more flexibility
and more power than the hard coded command sets of usual
tracers. This trace query language is based on Prolog.
Opium is therefore both a helpful tool for Prolog and a
nice application of Prolog. The most innovative extensions
of Opium compute abstract views of Prolog executions to
help users understand the behaviours of programs. In
particular they help them understand how error symptoms
have occurred. The paper first gives general information
about Opium. A debugging session is then commented in
detail. }
}


M. Ducassé
"Analysis of failing Prolog executions", In Actes des Journées Francophones sur la Programmation Logique, May 1992.
@InProceedings{duc92c,
author       = {M. Ducass\'{e}},
title        = {Analysis of failing {Prolog} executions},
booktitle    = {Actes des Journ\'{e}es Francophones sur la Programmation
Logique},
year         = {1992},
month        = {May},
abstract     = {The result of a Prolog execution can simply be ``no'',
when the programmer is expecting something else. This
symptom is typical of Prolog, and especially requires the
help of an execution tracer to get clues of what the
problem can be. We present a solution which helps
programmers to understand how unexpected failures have
occurred. We first propose a hierarchy of failing goals. We
argue that there is one kind of leaf failure which it is
interesting to track in the first place. Then we give the
algorithm for our leaf failure tracking and two examples
illustrating its use. }
}


M. Ducassé
"A trace analyser to prototype explanations", In Proceedings of JICSLP'92 Workshop on Logic Programming Environments, Washington D.C., November 1992, Technical Report TR 92-143, Case Western Reserve University, Cleveland.
@InProceedings{duc92b,
author       = {M. Ducass{\'e}},
title        = {A trace analyser to prototype explanations},
booktitle    = {Proceedings of JICSLP'92 Workshop on Logic Programming
Environments},
year         = {1992},
month        = {November},
note         = {Technical Report TR 92-143, Case Western Reserve
University, Cleveland},
abstract     = {Automated debugging and expert system explanations share
the aim of helping people to understand executions.
We have designed an extendable trace analyser for the
purpose of automated debugging, which we propose to use to
prototype expert system explanations. Two examples
illustrate how simple it is to implement abstract tracing
of executions and how easy it is to play with them. }
}


M. Ducassé
"An extendable trace analyser to support automated debugging", University of Rennes I, France, PhD thesis, June 1992, European Doctorate.
@PhDThesis{duc92,
author       = {M. Ducass\'{e}},
title        = {An extendable trace analyser to support automated
debugging},
school       = {University of Rennes I, France},
month        = {June},
year         = {1992},
note         = {European Doctorate},
abstract     = {The dissertation describes the innovative features of
Opium, a high-level debugging environment for Prolog,
designed and implemented at ECRC between 1985 and 1991.
Debugging is a costly process, and automating it would
significantly reduce the cost of software production and
maintenance. However, it is unrealistic to aim at fully
automating the task. In particular programmers have to
understand rapidly changing situations, examining large
amounts of data. In the current state of the art it is
beyond the capabilities of computers to take the place of
programmer's understanding. Nevertheless, computers can
significantly help programmers to select the data to be
analysed. The data used by program analysis in general is
often restricted to the source code of the analysed
programs. However, there is a complementary source of
information, namely traces of program executions. An
execution trace contains less general information than the
program source, but it tells how the program behaves in a
particular case. Furthermore, there are intrinsically
dynamic aspects in a program which are best analysed at
execution time, for example uses of read/write. These
remarks suggested building the automated debugging
functionalities of Opium on top of an existing tracer,
extending it to a general trace and source analyser. The
most important features of Opium, not to be found in other
debuggers, are as follows. - It provides a trace query
language which is a solution to the ever growing command
sets of other tracers. With two primitives plus Prolog,
users can already specify more precise trace queries than
with the hard coded commands of other tracers. - Opium is
programmable and extendable. It is thus an environment
where debugging strategies can be easily programmed and
integrated. Some strategies are already implemented. -
Abstract views of executions are proposed as a basis for
automated debugging. They help users to understand the
behaviours of programs by browsing through executions at a
higher level than single steppers. Opium is fully
implemented. More than 20 academic sites have recently
requested the Opium prototype, and some are currently
implementing new abstract views. }
}


M. Ducassé, G. Ferrand (Eds)
"Proceedings of ICLP'91 Workshop on Logic Programming Environments", June 1991, Technical Report, University of Orléans, France, LIFO N 91-61.
@Proceedings{df91,
title        = {Proceedings of ICLP'91 Workshop on Logic Programming
Environments},
year         = {1991},
editor       = {M. Ducass\'{e} and G. Ferrand},
month        = {June},
note         = {Technical Report, University of Orl\'{e}ans, France, LIFO
N 91-61}
}


P. Brisset, O. Ridoux
"Naïve Reverse Can Be Linear", In 8th Int. Conf. Logic Programming, pp. 857-870, K. Furukawa (Eds), MIT Press, 1991.

Keywords: LambdaProlog, implementation, function-lists, higher-order unification.
@InProceedings{brisset:naivereverse:iclp:91,
author       = {P. Brisset and O. Ridoux},
title        = {Na{\"\i}ve Reverse Can Be Linear},
booktitle    = {8th Int. Conf. Logic Programming},
editor       = {K. Furukawa},
publisher    = {MIT Press},
comment      = {Paris, France},
pages        = {857--870},
year         = 1991,
abstract     = {We propose a new implementation of logic programming with
higher-order terms. In order to illustrate the properties
of our implementation, we apply the coding of lists as
functions to the context of logic programming. As a
side-effect, we show that higher-order unification is a
good tool for manipulating the function-lists. It appears
that the efficiency of the program thus obtained relies
critically upon the implementation of higher-order
operations (unification and reduction). In particular, we
show that a good choice for data-structures and reduction
strategy yields a linear naïve reverse.},
keywords     = {LambdaProlog, implementation, function-lists,
higher-order unification.},
url          = {ftp://ftp.irisa.fr/local/lande/pbor-iclp91.ps.Z}
}


M. Ducassé
"Abstract views of Prolog executions in Opium", In Proceedings of the International Logic Programming Symposium, pp. 18-32, V. Saraswat, K. Ueda (Eds), MIT Press, San Diego, USA, October 1991.
@InProceedings{duc91c,
author       = {M. Ducass\'{e}},
url          = {ftp://ftp.irisa.fr/local/lande/md-ilps91.ps.gz},
title        = {Abstract views of {Prolog} executions in {Opium}},
booktitle    = {Proceedings of the International Logic Programming
Symposium},
editor       = {V. Saraswat and K. Ueda},
year         = {1991},
month        = {October},
publisher    = {MIT Press},
pages        = {18--32},
location     = {was IR-LP-31-28},
abstract     = {Opium is a system for analysing and debugging Prolog
programs. Its kernel comprises an execution tracer and a
programming language with a full set of primitives for
trace and source analysis. In this paper we show the power
of Opium for supporting abstract views of Prolog
executions. Abstract views give high-level points of view
about the execution. They filter out irrelevant details;
they restructure the remaining information; and they
compact it so that the amount of information given at each
step has a reasonable size. The examples of abstract views
given in the following are a goal execution profile, some
data abstractions, an instantiation profile, a failure
analysis and a kind of explanation for an expert system
written in Prolog. }
}


M. Ducassé, A.-M. Emde
"Opium: a debugging environment for Prolog development and debugging research", ACM Software Engineering Notes, vol. 16, nº 1, pp. 54-59, January 1991, Demonstration presented at the Fourth Symposium on Software Development Environments.
@Article{de91d,
author       = {M. Ducass\'{e} and A.-M. Emde},
title        = {Opium: a debugging environment for {Prolog} development
and debugging research},
journal      = {ACM Software Engineering Notes},
year         = {1991},
volume       = {16},
number       = {1},
pages        = {54--59},
month        = {January},
note         = {Demonstration presented at the Fourth Symposium on
Software Development Environments},
abstract     = {Opium is an extensible debugging environment for PROLOG
providing high-level debugging facilities for programmers
and debugging experts. In the design of debuggers there are
two tasks which are often mixed, extraction and analysis of
debugging information. The aim of the extraction task is to
collect the whole debugging information so that users do
not miss any important information about their program. On
the other hand, the aim of the analysis task is to restrict
in an accurate way the amount of debugging information
shown to the user so that the latter has to examine only
the relevant parts. This task clearly depends on the
debugging situation and, in our view, there is no
general restriction that can be made a priori. However,
the two tasks are usually mixed and hard-coded; the result
is that not enough relevant information and too much
useless information is displayed. In Opium the two tasks
are clearly separated. The extraction module collects the
whole debugging information (execution trace and program
source) which is then available for the analysis module.
The presentation concentrates on the analysis module,
discussing the main aspects of Opium: programmability,
high-level debugging, extensibility mechanisms,
meta-debugging, support for end-users and debugging
experts. }
}


M. Ducassé, A.-M. Emde
"A High-level Debugging Environment for Prolog. Opium User's Manual", ECRC, technical report, nº TR-LP-60, May 1991, Technical Report.
@TechReport{de91,
author       = {M. Ducass\'{e} and A.-M. Emde},
url          = {ftp://ftp.ecrc.de/pub/eclipse/doc/dvi/opium-manual.ps.Z},
title        = {A High-level Debugging Environment for {Prolog}. {Opium}
User's Manual},
institution  = {ECRC},
number       = {TR-LP-60},
type         = {Technical Report},
month        = {May},
year         = {1991}
}


P. Fradet
"Syntactic Detection of Single-Threading Using Continuations", In Proc. of the 5th ACM Conf. on Functional Prog. Lang. and Comp. Arch., LNCS, vol. 523, pp. 241-258, Springer-Verlag, Cambridge, MA, USA, August 1991.

Keywords: Globalization, single-threading, in-place update, CPS conversion.
@InProceedings{fpca91,
author       = {P. Fradet},
title        = {Syntactic Detection of Single-Threading Using
Continuations},
booktitle    = {Proc. of the 5th ACM Conf. on Functional Prog. Lang. and
Comp. Arch.},
year         = 1991,
url          = {ftp://ftp.irisa.fr/local/lande/pf-fpca91.ps.Z},
keywords     = {Globalization, single-threading, in-place update, CPS
conversion.},
abstract     = {We tackle the problem of detecting global variables in
functional programs. We present syntactic criteria for
single-threading which improves upon previous solutions
(both syntactic and semantics-based) in that it applies to
higher-order languages and to most sequential evaluation
strategies. The main idea of our approach lies in the use
of continuations. One advantage of continuation expressions
is that evaluation ordering is made explicit in the syntax
of expressions. So, syntactic detection of single-threading
is simpler and more powerful on continuation expressions.
We present the application of the analysis to the
compilation of functional languages, semantics-directed
compiler generation and globalization-directed
transformations (turning expressions into single-threaded
ones). Our results can
also be turned to account to get single-threading criteria
on regular lambda-expressions for different sequential
evaluation orders.},
volume       = 523,
series       = {LNCS},
publisher    = {Springer-Verlag},
month        = {August},
pages        = {241--258}
}


P. Fradet, D. Le Métayer
"Compilation of functional languages by program transformation", ACM Transactions on Programming Languages and Systems, vol. 13, nº 1, pp. 21-51, 1991.
@Article{TOPLAS91,
author       = {P. Fradet and D. Le Métayer},
title        = {Compilation of functional languages by program transformation},
journal      = {{ACM} Transactions on Programming Languages and Systems},
year         = {1991},
volume       = {13},
number       = {1},
pages        = {21--51},
abstract     = {One of the most important issues concerning functional languages is the efficiency and the correctness
of their implementation. We focus on sequential implementations for conventional von Neumann
computers. The compilation process is described in terms of program transformations in the functional
framework. The original functional expression is transformed into a functional term that can be seen
as a traditional machine code. The two main steps are the compilation of the computation rule by
the introduction of continuation functions and the compilation of the environment management
using combinators. The advantage of this approach is that we do not have to introduce an abstract
machine, which makes correctness proofs much simpler. As far as efficiency is concerned, this
approach is promising since many optimizations can be described and formally justified in the
functional framework.}
}


Generated on Thu Oct 10 09:55:54 MET DST 2002 by genet
The generator is entirely written in Haskell (sources will be available soon)