
All publications sorted by title
2006
  1. B. Botella, A. Gotlieb, and C. Michel. Symbolic execution of floating-point computations. The Software Testing, Verification and Reliability journal, 2006. Note: To appear. Keyword(s): Symbolic execution, Floating-point computations, Automatic test data generation, Constraint solving.
    Abstract:
    Symbolic execution is a classical program testing technique which evaluates a selected control flow path with symbolic input data. A constraint solver can be used to enforce the satisfiability of the extracted path conditions as well as to derive test data. Whenever path conditions contain floating-point computations, a common strategy consists of using a constraint solver over the rationals or the reals. Unfortunately, even in a fully IEEE-754 compliant environment, this not only leads to approximations but can also compromise correctness: a path can be labelled as infeasible although there exists floating-point input data that satisfy it. In this paper, we address the peculiarities of the symbolic execution of programs with floating-point numbers. Issues in the symbolic execution of this kind of program are carefully examined and a constraint solver is described that supports constraints over floating-point numbers. Preliminary experimental results demonstrate the value of our approach.
    [bibtex-key = BGM05] [bibtex-entry]
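
    Illustration (editor's sketch, OCaml; not from the paper): the mislabelling problem described above is easy to reproduce. Over the reals, the path condition x + 1.0 = x is unsatisfiable, so a solver over the rationals or reals declares the path infeasible; over IEEE-754 doubles it has witnesses, because for large enough x the increment 1.0 is absorbed by rounding.

      (* Path condition x +. 1.0 = x: unsatisfiable over the reals, so a
         rational/real solver labels the path infeasible.  Over IEEE-754
         doubles it is satisfiable: at x = 1e16 the increment is absorbed. *)
      let path_condition x = x +. 1.0 = x

      let () =
        assert (not (path_condition 1.0));  (* small x: behaves like the reals *)
        assert (path_condition 1e16);       (* large x: rounding absorbs the +1 *)
        print_endline "x = 1e16 is a floating-point witness for x +. 1.0 = x"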


  2. Frédéric Besson, Guillaume Dufay, and Thomas Jensen. A Formal Model of Access Control for Mobile Interactive Devices. In 11th European Symposium On Research In Computer Security (ESORICS'06), volume 4189 of Lecture Notes in Computer Science, 2006. Springer. [WWW ]
    Abstract:
    This paper presents an access control model for programming applications in which the access control to resources can employ user interaction to obtain the necessary permissions. This model is inspired by and improves on the Java MIDP security architecture used in Java-enabled mobile telephones. We consider access control permissions with multiplicities in order to allow a permission to be used a certain number of times. An operational semantics of the model and a formal definition of what it means for an application to respect the security model are given. A static analysis which enforces the security model is defined and proved correct. A constraint solving algorithm implementing the analysis is presented.
    [bibtex-key = ESORICS-06-BJD] [bibtex-entry]
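
    Illustration (editor's sketch, OCaml; the API and resource names are illustrative, not the paper's formalism): a permission with multiplicity is granted with a use count, each sensitive access consumes one unit, and exhaustion forces a new user interaction.

      (* Permission multiplicities: a grant carries a number of uses; each
         sensitive access consumes one, and exhaustion forces a new user
         interaction. *)
      module Perms = Map.Make (String)

      let grant resource n st = Perms.add resource n st

      let access resource st =
        match Perms.find_opt resource st with
        | Some n when n > 0 -> Ok (Perms.add resource (n - 1) st)
        | _ -> Error "permission exhausted: a new user interaction is needed"

      let () =
        let st = grant "sms" 2 Perms.empty in
        let st = match access "sms" st with Ok s -> s | Error _ -> assert false in
        let st = match access "sms" st with Ok s -> s | Error _ -> assert false in
        match access "sms" st with
        | Ok _ -> assert false                    (* only two uses were granted *)
        | Error msg -> print_endline msg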


  3. F. Besson, T. Jensen, and D. Pichardie. A PCC Architecture based on Certified Abstract Interpretation. In Proc. of 1st International Workshop on Emerging Applications of Abstract Interpretation (EAAI'06), ENTCS, 2006. Springer-Verlag. [WWW ]
    Abstract:
    Proof-Carrying Code (PCC) is a technique for downloading mobile code on a host machine while ensuring that the code adheres to the host's security policy. We show how certified abstract interpretation can be used to build a PCC architecture where the code producer can produce program certificates automatically. Code consumers use proof checkers derived from certified analysers to check certificates. Proof checkers carry their own correctness proofs and accepting a new proof checker amounts to type checking the checker in Coq. Fixpoint compression techniques are used to obtain compact certificates. The PCC architecture has been evaluated experimentally on a byte code language for which we have designed an interval analysis that allows us to generate certificates ascertaining that no array-out-of-bounds accesses will occur.
    [bibtex-key = EAAI:BessonJensenPichardie] [bibtex-entry]
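
    Illustration (editor's sketch, OCaml; a toy interval domain, not the paper's byte-code analysis): the point of such an architecture is that checking a certificate is far cheaper than inferring it. A certificate can simply be a claimed (post-)fixpoint; the consumer only re-checks that the abstract transfer function does not escape it.

      (* Minimal interval sketch for "i := 0; while i < 10 do i := i + 1 done":
         the consumer accepts a certificate iff step(cert) is included in cert. *)
      type itv = { lo : int; hi : int }

      let leq a b = b.lo <= a.lo && a.hi <= b.hi            (* interval inclusion *)
      let join a b = { lo = min a.lo b.lo; hi = max a.hi b.hi }

      (* Abstract transfer for the loop head: initial state i := 0 joined with
         the back edge (guard i < 10, then i := i + 1). *)
      let step head =
        let guarded = { head with hi = min head.hi 9 } in    (* assume i < 10 *)
        let incremented = { lo = guarded.lo + 1; hi = guarded.hi + 1 } in
        join { lo = 0; hi = 0 } incremented

      let check certificate = leq (step certificate) certificate

      let () =
        assert (check { lo = 0; hi = 10 });        (* valid certificate *)
        assert (not (check { lo = 0; hi = 5 }));   (* not a post-fixpoint *)
        print_endline "certificate {0..10} accepted"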


  4. Y. Boichut and T. Genet. Feasible Trace Reconstruction for Rewriting Approximations. In RTA, volume 4098 of LNCS, pages 123-135, 2006. Springer. [bibtex-key = BoichutG-RTA06] [bibtex-entry]


  5. Y. Glouche, T. Genet, O. Heen, and O. Courtay. A Security Protocol Animator Tool for AVISPA. In ARTIST-2 workshop on security of embedded systems, Pisa (Italy), 2006. Note: http://www.irisa.fr/lande/genet/publications.html. [WWW ] [bibtex-key = Animator] [bibtex-entry]


  6. Pascal Sotin, David Cachera, and Thomas Jensen. Quantitative Static Analysis over semirings: analysing cache behaviour for Java Card. In Quantitative Aspects of Programming Languages, 2006. [bibtex-key = SCJ06] [bibtex-entry]


  7. G. Le Guernic, A. Banerjee, and D. Schmidt. Automaton-based Non-interference Monitoring. Technical report 2006-1, Department of Computing and Information Sciences, College of Engineering, Kansas State University, 234 Nichols Hall, Manhattan, KS 66506, USA, April 2006.
    Abstract:
    This report presents a non-interference monitoring mechanism for sequential programs. Non-interference is a property of the information flows of a program. It implies that the confidentiality of the secret information manipulated is respected. The approach taken uses an automaton-based monitor. During the execution, abstractions of the occurring events are sent to the automaton. The automaton uses those inputs to track the information flows and to control the execution by forbidding or editing dangerous actions. The mechanism proposed is proved to be sound and more efficient than a type system similar to the historical one developed by Volpano, Smith and Irvine.
    [bibtex-key = LeGuernic2006abnim] [bibtex-entry]
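
    Illustration (editor's sketch, OCaml; the two-level lattice and event alphabet are the editor's simplification, and the report's handling of implicit flows through non-executed branches is omitted): the monitor consumes abstractions of execution events, tracks security levels, and edits dangerous actions.

      type level = Low | High
      let lub a b = if a = High || b = High then High else Low

      module Env = Map.Make (String)

      type event =
        | Assign of string * string list   (* x := e, with variables read by e *)
        | Output of string                 (* print x on the public channel *)

      let level_of env x = Option.value (Env.find_opt x env) ~default:Low

      (* The monitor returns the updated environment and the action to perform,
         possibly edited when it would leak secret information. *)
      let monitor env = function
        | Assign (x, reads) ->
            let lv = List.fold_left (fun acc y -> lub acc (level_of env y)) Low reads in
            (Env.add x lv env, None)
        | Output x ->
            if level_of env x = High
            then (env, Some "output suppressed")          (* edit the action *)
            else (env, Some ("print " ^ x))

      let () =
        let env = Env.add "secret" High Env.empty in
        let env, _ = monitor env (Assign ("y", ["secret"])) in
        let _, out = monitor env (Output "y") in
        print_endline (Option.get out)      (* prints "output suppressed" *)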


2005
  1. David Pichardie. Interprétation abstraite en logique intuitionniste : extraction d'analyseurs Java certiés. PhD thesis, Université Rennes 1, Rennes, France, December 2005. [bibtex-key = PicThe05] [bibtex-entry]


  2. F. Besson, T. de Grenier de Latour, and T. Jensen. Interfaces for stack inspection. Journal of Functional Programming, 15(2):179--217, 2005. [WWW ]
    Abstract:
    Stack inspection is a mechanism for programming secure applications in the presence of code from various protection domains. Run-time checks of the call stack allow a method to obtain information about the code that (directly or indirectly) invoked it in order to make access control decisions. This mechanism is part of the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. A further problem is how such verification can be carried out in an incremental fashion. Incremental analysis is important for avoiding re-analysis of library code every time it is used, and permits the library developer to reason about the code without knowing its context of deployment. We propose a technique for inferring interfaces for stack-inspecting libraries in the form of secure calling contexts for methods. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. The technique is a constraint-based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.
    [bibtex-key = Besson:05:Interfaces] [bibtex-entry]
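
    Illustration (editor's sketch, OCaml; frame layout and permission names are illustrative): the run-time half of stack inspection walks the call stack and grants a permission only if every frame, up to an enabling privileged one, carries it. A secure calling context in the sense of the paper is a pre-condition on the part of the stack below the library frames.

      type frame = { owner : string; perms : string list; privileged : bool }

      (* Walk the stack from the most recent frame: every frame must hold the
         permission, and a privileged frame stops the inspection early. *)
      let check_permission p stack =
        let rec walk = function
          | [] -> true                               (* reached the stack base *)
          | f :: rest -> List.mem p f.perms && (f.privileged || walk rest)
        in
        walk stack

      let () =
        let lib = { owner = "library"; perms = ["fileRead"]; privileged = false } in
        let app = { owner = "applet"; perms = []; privileged = false } in
        (* library code called directly: permitted *)
        assert (check_permission "fileRead" [lib]);
        (* same library invoked (indirectly) by an unprivileged applet: denied *)
        assert (not (check_permission "fileRead" [lib; app]));
        print_endline "stack inspection denies the applet-initiated call"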


  3. David Cachera, Thomas Jensen, David Pichardie, and Vlad Rusu. Extracting a Data Flow Analyser in Constructive Logic. Theoretical Computer Science, 2005.
    Abstract:
    We show how to formalise a constraint-based data flow analysis in the specification language of the Coq proof assistant. This involves defining a dependent type of lattices together with a library of lattice functors for modular construction of complex abstract domains. Constraints are expressed in an intermediate representation that allows for both efficient constraint resolution and correctness proof of the analysis with respect to an operational semantics. The proof of existence of a correct, minimal solution to the constraints is constructive, which means that the extraction mechanism of Coq provides a provably correct data flow analyser in OCaml. The library of lattices together with the intermediate representation of constraints are defined in an analysis-independent fashion that provides a basis for a generic framework for proving and extracting static analysers in Coq.
    [bibtex-key = CaJePiRu05TCSExtractAnalyser] [bibtex-entry]
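
    Illustration (editor's sketch, OCaml; the Coq development uses dependent types, so this is only an analogue): a lattice signature and a product functor show the modular-construction style on which such a library of lattice functors rests.

      module type LATTICE = sig
        type t
        val bottom : t
        val leq : t -> t -> bool
        val join : t -> t -> t
      end

      (* Product functor: combines two lattices componentwise. *)
      module Prod (A : LATTICE) (B : LATTICE) : LATTICE with type t = A.t * B.t =
      struct
        type t = A.t * B.t
        let bottom = (A.bottom, B.bottom)
        let leq (a1, b1) (a2, b2) = A.leq a1 a2 && B.leq b1 b2
        let join (a1, b1) (a2, b2) = (A.join a1 a2, B.join b1 b2)
      end

      (* A tiny instance: the two-point lattice, and its square via Prod. *)
      module Bool2 : LATTICE with type t = bool = struct
        type t = bool
        let bottom = false
        let leq a b = (not a) || b
        let join = ( || )
      end

      module Pair = Prod (Bool2) (Bool2)

      let () =
        assert (Pair.leq Pair.bottom (Pair.join (true, false) (false, true)));
        print_endline "product lattice built from functors"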


  4. David Cachera and Katell Morin-Allory. Verification of safety properties for parameterized regular systems. Trans. on Embedded Computing Systems, 4(2):228--266, 2005.
    Abstract:
    We propose a combination of heuristic methods to prove properties of control signals for regular systems defined by means of affine recurrence equations (AREs). We benefit from the intrinsic regularity of the underlying polyhedral model to handle parameterized systems in a symbolic way. Our techniques apply to safety properties. The general proof process consists in an iteration that alternates two heuristics. We are able to identify the cases when this iteration will stop in a finite number of steps. These techniques have been implemented in a high level synthesis environment based on the polyhedral model.
    [bibtex-key = CacMorTECS05] [bibtex-entry]


  5. David Cachera, Thomas Jensen, David Pichardie, and Gerardo Schneider. Certified Memory Usage Analysis. In Proc of 13th International Symposium on Formal Methods (FM'05), Lecture Notes in Computer Science, 2005. Springer-Verlag.
    Abstract:
    We present a certified algorithm for resource usage analysis, applicable to languages in the style of Java byte code. The algorithm verifies that a program executes in bounded memory. The algorithm is intended to be used in the development process of applets and for enhanced byte code verification on embedded devices. We have therefore aimed at a low-complexity algorithm derived from a loop detection algorithm for control flow graphs. The expression of the algorithm as a constraint-based static analysis of the program over simple lattices provides a link with abstract interpretation that allows us to state and formally prove the correctness of the analysis with respect to an operational semantics of the program. The certification is based on an abstract interpretation framework implemented in the Coq proof assistant which has been used to provide a complete formalisation and formal verification of all correctness proofs.
    [bibtex-key = CaJePiSc05MemoryUsage] [bibtex-entry]
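
    Illustration (editor's sketch, OCaml; the reduction and the graph encoding are the editor's simplification of the loop-detection idea above): a memory-bound check in this style amounts to verifying that no allocation point lies on a cycle of the control flow graph.

      let successors graph n = try List.assoc n graph with Not_found -> []

      (* Naive reachability by depth-first search over an adjacency list. *)
      let reachable graph src dst =
        let rec go visited = function
          | [] -> false
          | n :: rest ->
              if n = dst then true
              else if List.mem n visited then go visited rest
              else go (n :: visited) (successors graph n @ rest)
        in
        go [] [src]

      (* A node is on a cycle iff it is reachable from one of its successors. *)
      let on_cycle graph n =
        List.exists (fun s -> reachable graph s n) (successors graph n)

      let bounded_memory graph alloc_nodes =
        not (List.exists (on_cycle graph) alloc_nodes)

      let () =
        (* 0 -> 1 -> 2 -> 1 (loop), 2 -> 3; an allocation at node 0 is fine,
           an allocation at node 1 sits inside the loop. *)
        let g = [ (0, [1]); (1, [2]); (2, [1; 3]) ] in
        assert (bounded_memory g [0]);
        assert (not (bounded_memory g [1]));
        print_endline "allocation inside the loop rejected"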


  6. David Cachera, Thomas Jensen, and Pascal Sotin. Estimating Cache Misses with Semi-Modules and Quantitative Abstraction. In Proc. of IST-APPSEM II Workshop on Applied Semantics, 2005. [bibtex-key = Sotin05] [bibtex-entry]


  7. T. Denmat, M. Ducassé, and O. Ridoux. Data mining and cross-checking of execution traces. A re-interpretation of Jones, Harrold and Stasko test information visualization. In T. Ellman and A. Zisman, editors, Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, November 2005. ACM Press. Note: See RR-5661 for a long version of this article. Keyword(s): Software Engineering, Debugging, Artificial Intelligence, Learning, Knowledge acquisition.
    Abstract:
    The current trend in debugging and testing is to cross-check information collected during several executions. Jones et al., for example, propose to use the instruction coverage of passing and failing runs in order to visualize suspicious statements. This seems promising but lacks a formal justification. In this paper, we show that the method of Jones et al. can be re-interpreted as a data mining procedure. More particularly, they define an indicator which characterizes association rules between data. With this formal framework we are able to explain intrinsic limitations of the above indicator.
    [bibtex-key = denmat05b] [bibtex-entry]
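
    Illustration (editor's sketch, OCaml; the exact indicator of Jones et al. is a normalized variant, so raw rule confidence is only the data mining reading discussed above): for each statement s, compute the confidence of the association rule "s is covered implies the run fails". Confidence ignores support, which is one kind of intrinsic limitation such a formal framework exposes.

      (* Each run is a pair: (statements covered, did the run fail?). *)
      let confidence runs s =
        let covering = List.filter (fun (cov, _) -> List.mem s cov) runs in
        let failing = List.filter snd covering in
        float_of_int (List.length failing) /. float_of_int (List.length covering)

      let () =
        let runs = [ ([1; 2; 3], false);        (* passing run *)
                     ([1; 2], false);           (* passing run *)
                     ([1; 3], true) ] in        (* failing run *)
        Printf.printf "conf(1 => fail) = %.2f\n" (confidence runs 1);   (* 0.33 *)
        Printf.printf "conf(3 => fail) = %.2f\n" (confidence runs 3)    (* 0.50 *)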


  8. T. Denmat, A. Gotlieb, and M. Ducassé. Proving or Disproving Likely Invariants with Constraint Reasoning. In A. Serebrenik, editor, Proceedings of the 15th Workshop on Logic-based Methods in Programming Environments, Sitges, SPAIN, October 2005. Note: Satellite event of the International Conference on Logic Programming (ICLP'2005). Published in the Computing Research Repository, cs.SE/0508108. [WWW ] Keyword(s): Software Engineering, Testing and Debugging, Program verification, Constraint and logic languages.
    Abstract:
    A program invariant is a property that holds for every execution of the program. Recent work suggests inferring likely invariants, via dynamic analysis. A likely invariant is a property that holds for some executions but is not guaranteed to hold for all executions. In this paper, we present work in progress addressing the challenging problem of automatically verifying that likely invariants are actual invariants. We propose a constraint-based reasoning approach that is able, unlike other approaches, to both prove and disprove likely invariants. In the latter case, our approach provides counter-examples. We illustrate the approach on a motivating example where automatically generated likely invariants are verified.
    [bibtex-key = DGD05] [bibtex-entry]


  9. Pascal Fradet and Stéphane Hong Tuan Ha. Systèmes de gestion de ressources et aspects de disponibilité. In 2ème Journée Francophone sur le Développement de Logiciels par Aspects (JFDLPA 2005), Lille, France, September 2005. [WWW ] Keyword(s): availability, security, AOP.
    Abstract:
    In this paper, we consider resource management in isolation (separation of concerns) and the prevention of denial of service (i.e. availability) as aspects. We concentrate on denials of service caused by resource management (starvations, deadlocks). Our aspects specify time limits or orderings in the allocation of resources. They can be seen as the specification of an availability policy. The approach relies on timed automata to specify services and aspects. It allows us to implement weaving as an automata product and to use model-checking tools to verify that aspects enforce the required availability properties.
    [bibtex-key = FH05] [bibtex-entry]


  10. A. Gotlieb, T. Denmat, and B. Botella. Constraint-based test data generation in the presence of stack-directed pointers. In 20th IEEE/ACM International Conference on Automated Software Engineering (ASE'05), Long Beach, CA, USA, Nov. 2005. Note: 4 pages, short paper. Keyword(s): Constraint-Based Test data generation, Constraint satisfaction, stack-directed pointers.
    Abstract:
    Constraint-Based Test data generation (CBT) exploits constraint satisfaction techniques to generate test data able to kill a given mutant or to reach a selected branch in a program. When pointer variables are present in the program, aliasing problems may arise and may lead to the failure of current CBT approaches. In our work, we propose an overall CBT method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating test data able to reach a selected branch. Our approach correctly handles multi-level stack-directed pointers that are mainly used in real-time control systems. The method has been fully implemented in the test data generation tool INKA, and early experiments in applying it to a variety of existing programs show the interest of the approach.
    [bibtex-key = GDB05b] [bibtex-entry]


  11. A. Gotlieb, T. Denmat, and B. Botella. Goal-oriented test data generation for programs with pointer variables. In 29th IEEE Annual International Computer Software and Applications Conference (COMPSAC'05), Edinburgh, Scotland, pages 449-454, July 2005. Note: 6 pages. Keyword(s): Goal-oriented test data generation, Constraint Logic Programming, Static Single Assignment form, pointer variables.
    Abstract:
    Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. This paper proposes to extend an existing goal-oriented test data generation technique to deal with multi-level pointer variables. The approach exploits the results of an intraprocedural flow-sensitive points-to analysis to automatically generate goal-oriented test data at the unit testing level. Implementation is in progress and a few examples are presented.
    [bibtex-key = GDB05a] [bibtex-entry]


  12. S.D. Gouraud and A. Gotlieb. Utilisation des CHRs pour générer des cas de test fonctionnel pour la Machine Virtuelle Java Card. In Premières Journées Francophones de Programmation par Contraintes (JFPC'05), Lens, FRANCE, June 8-10, 2005.
    Abstract:
    Functional testing based on a formal specification consists of deriving test cases from a formal model in order to detect faults in its implementation. In our work, we study the use of Constraint Handling Rules (CHRs) to automate the generation of functional test cases from a formal model. Our case study is a model of the specification of the Java Card Virtual Machine (JCVM) written in a subset of the Coq language. In this paper, we define an automatic translation of this model into CHR rules with the aim of generating test cases for the JCVM. The key point of our approach lies in the use of deep guards to faithfully model the semantics of our formal model of the JCVM. We then propose a general approach to CHR-based functional test case generation that can be applied to other formal models.
    [bibtex-key = GG05] [bibtex-entry]


  13. G. Le Guernic and Thomas Jensen. Monitoring Information Flow. In Andrei Sabelfeld, editor, Proceedings of the 2005 Workshop on Foundations of Computer Security (FCS'05), pages 19--30, June 2005. DePaul University. Note: LICS'05 Affiliated Workshop.
    Abstract:
    We present an information flow monitoring mechanism for sequential programs. The monitor executes a program on standard data that are tagged with labels indicating their security level. We formalize the monitoring mechanism as a big-step operational semantics that integrates a static information flow analysis to gather information flow properties of non-executed branches of the program. Using the information flow monitoring mechanism, it is then possible to partition the set of all executions into two sets. The first contains executions which are safe; the other contains executions which may be unsafe. Based on this information, we show that, by resetting the value of some output variables, it is possible to alter the behavior of executions belonging to the second set in order to ensure the confidentiality of secret data.
    [bibtex-key = LeGuernic2005mif] [bibtex-entry]


  14. G. Le Guernic and Julien Perret. FL-system's Intelligent Cache. In Alexandre Vautier and Sylvie Saget, editors, Proceedings of Majecstic 2005, pages 79--88, November 2005. [bibtex-key = leguernic05flic] [bibtex-entry]


  15. L. Langevine and M. Ducassé. A Tracer Driver for Hybrid Execution Analyses. In Proceedings of the 6th Automated Debugging Symposium, September 2005. ACM Press. Note: See RR-5611 for a longer version of this article. [WWW ] Keyword(s): Software Engineering, Debugging, Monitors, Tracing, Programming Environments.
    Abstract:
    Tracers provide users with useful information about program executions. In this paper we propose a ``tracer driver'': from a single tracer, it provides a powerful front-end for multiple dynamic analysis tools while limiting the overhead of the trace generation. The tracer driver can be used both synchronously and asynchronously. The relevant execution events are specified by flexible event patterns and a large variety of trace data can be given either systematically or ``on demand''. The proposed tracer driver has been designed and experimented in the context of constraint logic programming, within GNU-Prolog. Its principles are, however, independent of the traced programming language. Experimental measures show that the flexibility and power of the described architecture are also the basis of reasonable performances.
    [bibtex-key = langevine05] [bibtex-entry]
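
    Illustration (editor's sketch, OCaml; event fields and patterns are illustrative): a tracer driver forwards, synchronously here, only the events that match the patterns registered by the connected tools, which is what keeps the trace generation overhead low.

      type event = { port : string; depth : int; goal : string }

      type pattern = event -> bool     (* flexible event patterns as predicates *)

      let tools : (pattern * (event -> unit)) list ref = ref []

      let connect pattern callback = tools := (pattern, callback) :: !tools

      (* The single tracer emits every event; the driver filters per tool. *)
      let trace event =
        List.iter (fun (p, callback) -> if p event then callback event) !tools

      let () =
        (* a "debugger" tool only wants failures at shallow depth *)
        connect
          (fun e -> e.port = "fail" && e.depth <= 2)
          (fun e -> Printf.printf "debugger: %s failed\n" e.goal);
        trace { port = "call"; depth = 1; goal = "p(X)" };      (* filtered out *)
        trace { port = "fail"; depth = 1; goal = "p(X)" }       (* forwarded *)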


  16. L. Langevine and M. Ducassé. A Tracer Driver for Versatile Dynamic Analyses of Constraint Logic Programs. In A. Serebrenik, editor, Proceedings of the 15th Workshop on Logic-based Methods in Programming Environments, Sitges, SPAIN, October 2005. Note: Satellite event of the International Conference on Logic Programming (ICLP'2005). Published in the Computing Research Repository, cs.SE/0508105. [WWW ] Keyword(s): Software Engineering, Debugging, Monitors, Tracing, Programming Environments.
    Abstract:
    Programs with constraints are hard to debug. In this paper, we describe a general architecture to help develop new debugging tools for constraint programming. The possible tools are fed by a single general-purpose tracer. A tracer driver is used to adapt the actual content of the trace, according to the needs of the tool. This enables the tools and the tracer to communicate in a client-server scheme. Each tool describes its needs for execution data by means of event patterns. The tracer driver scrutinizes the execution according to these event patterns and sends only the data that are relevant to the connected tools. Experimental measures show that this approach leads to good performance in the context of constraint logic programming, where a large variety of tools exists and the trace is potentially huge.
    [bibtex-key = langevine05b] [bibtex-entry]


  17. Katell Morin-Allory and David Cachera. Proving Parameterized Systems: The Use of Pseudo-Pipelines in Polyhedral Logic. In Proc. of 13th CHARME conference, Lecture Notes in Computer Science, pages 376--379, 2005. Springer-Verlag. [bibtex-key = MorCacCHARME05] [bibtex-entry]


  18. F. Besson, T. Jensen, and D. Pichardie. A PCC Architecture based on Certified Abstract Interpretation. Research Report RR-5751, INRIA, November 2005. Note: Also Publication Interne IRISA PI-1764. [PDF ] [bibtex-key = RR-5751] [bibtex-entry]


  19. David Cachera and Katell Morin-Allory. Proving Parameterized Systems: the use of a widening operator and pseudo-pipelines in polyhedral logic. Technical report TIMA-RR--05/04-01--FR, TIMA-IMAG, 2005.
    Abstract:
    We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, which are particular patterns in the variable definitions generalizing the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.
    [bibtex-key = CacMorRR05] [bibtex-entry]


  20. T. Denmat, M. Ducassé, and O. Ridoux. Data Mining and Cross-checking of Execution Traces. A re-interpretation of Jones, Harrold and Stasko test information visualization (Long version). Research Report RR-5661, INRIA, August 2005. Note: Also Publication Interne IRISA PI-1743. [WWW ] [bibtex-key = denmat05c] [bibtex-entry]


  21. L. Langevine and M. Ducassé. A Tracer Driver to Enable Concurrent Dynamic Analyses. Research Report RR-5611, INRIA, June 2005. [WWW ] [bibtex-key = langevine05c] [bibtex-entry]


2004
  1. M. Ducassé, U. Nilsson, and D. Seipel, editors. Proceedings of the First International Workshop on Teaching Logic Programming: TeachLP, http://www.ep.liu.se/ecp/012/, September 2004. Linköping Electronic Conference Proceedings, Issue No. 12. Note: ISSN 1650-3686 (print), 1650-3740 (www). [bibtex-key = ducasse04b] [bibtex-entry]


  2. M. Huisman and T. Jensen, editors. J. Logic and Algebraic Programming. Special issue on Formal Methods for Smart Cards, vol. 58(1-2), 2004. Elsevier. [bibtex-key = Huisman:04:JLAPSmartCards] [bibtex-entry]


  3. Ludovic Langevine. Observation de programmes avec contraintes et traces d'exécutions : une sémantique générique et une architecture d'analyse dynamique. PhD thesis, INSA de Rennes, December 2004. [bibtex-key = langevine04d] [bibtex-entry]


  4. B. Morin. Corrélation d'alertes issues d'outils de détection d'intrusions avec prise en compte d'informations sur le système surveillé. PhD thesis, INSA de Rennes, February 2004. [bibtex-key = Morin04] [bibtex-entry]


  5. Katell Morin-Allory. Vérification formelle dans le modèle polyédrique. PhD thesis, Université de Rennes 1, October 2004. [bibtex-key = alloy-morin04] [bibtex-entry]


  6. L. Langevine, P. Deransart, and M. Ducassé. A Generic Trace Schema for the Portability of CP(FD) Debugging Tools. In J. Vancza, K. Apt, F. Fages, F. Rossi, and P. Szeredi, editors, Recent advances in Constraint Programming, pages 171--195. Springer-Verlag, Lecture Notes in Artificial Intelligence 3010, 2004. [bibtex-key = langevine04] [bibtex-entry]


  7. Hervé Debar, Benjamin Morin, Frédéric Cuppens, Fabien Autrel, Ludovic Mé, Bernard Vivinis, Salem Benferhat, Mireille Ducassé, and Rodolphe Ortalo. Corrélation d'alertes en détection d'intrusions. Technique et Science Informatiques, 23(3):323--358, 2004. [bibtex-key = debar04] [bibtex-entry]


  8. M. Eluard and T. Jensen. Vérification du contrôle d'accès dans des cartes à puce multi-application. Technique et Science Informatiques, 23(3):323--358, 2004. [bibtex-key = Eluard:04:TSI] [bibtex-entry]


  9. G. Feuillade, T. Genet, and V. Viet Triem Tong. Reachability Analysis over Term Rewriting Systems. Journal of Automated Reasoning, 33 (3-4):341-383, 2004. [WWW ] Keyword(s): Term Rewriting, Reachability, Tree Automata, Approximations, Verification, Timbuk, Abstract Interpretation.
    Abstract:
    This paper surveys some techniques and tools for achieving reachability analysis over term rewriting systems. The core of those techniques is a generic tree automata completion algorithm used to compute in an exact or approximated way the set of descendants (or reachable terms). This algorithm has been implemented in the Timbuk tool. Furthermore, we show that many classes from the literature with regular sets of descendants correspond to specific instances of the tree automata completion algorithm and can thus be efficiently computed by Timbuk. An extension of the completion algorithm to conditional term rewriting systems and some applications are also presented.
    [bibtex-key = FeuilladeGVTT-JAR04] [bibtex-entry]
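
    Illustration (editor's sketch, OCaml; the paper's completion algorithm works on tree automata and over-approximates when necessary, whereas this naive enumeration only terminates because the example system does): the set of descendants R*(E) of a term under the rules plus(0,x) -> x and plus(s(x),y) -> s(plus(x,y)).

      type term = Zero | S of term | Plus of term * term

      (* One-step rewrites at the root, hard-coded for the two rules above. *)
      let root_steps = function
        | Plus (Zero, x) -> [x]
        | Plus (S x, y) -> [S (Plus (x, y))]
        | _ -> []

      (* All one-step rewrites of t, at the root or in a subterm. *)
      let rec steps t =
        root_steps t
        @ (match t with
           | Zero -> []
           | S u -> List.map (fun u' -> S u') (steps u)
           | Plus (u, v) ->
               List.map (fun u' -> Plus (u', v)) (steps u)
               @ List.map (fun v' -> Plus (u, v')) (steps v))

      (* Exhaustive exploration of the reachable terms. *)
      let rec descendants seen = function
        | [] -> seen
        | t :: rest ->
            if List.mem t seen then descendants seen rest
            else descendants (t :: seen) (steps t @ rest)

      let () =
        let init = Plus (S Zero, S Zero) in             (* 1 + 1 *)
        let d = descendants [] [init] in
        assert (List.mem (S (S Zero)) d);               (* 2 is reachable *)
        Printf.printf "%d reachable terms from 1+1\n" (List.length d)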


  10. T. Jensen and J.-L. Lanet. Modélisation et vérification dans les cartes à puce. Revue d'électricité et de l'électronique, 6/7:89--94, 2004. [bibtex-key = Jensen:04:CartesAPuce] [bibtex-entry]


  11. Gerardo Schneider. Computing Invariance Kernels of Polygonal Hybrid Systems. Nordic Journal of Computing, 11(2):194--210, 2004. [POSTSCRIPT ] Keyword(s): hybrid system, differential inclusion, SPDI, invariance kernel, phase portrait. [bibtex-key = schneider04njc] [bibtex-entry]


  12. David Cachera, Thomas Jensen, David Pichardie, and Vlad Rusu. Extracting a data flow analyser in constructive logic. In Proc. ESOP'04, number 2986 of Springer LNCS, pages 385--400, 2004. [bibtex-key = CachJen:esop04] [bibtex-entry]


  13. P. Fradet and S. Hong Tuan Ha. Network Fusion. In Wei-Ngan Chin, editor, Programming Languages and Systems: Second Asian Symposium, APLAS 2004, volume 3302 of Springer LNCS, pages 21--40, 2004. [bibtex-key = Fradet:04:Fusion] [bibtex-entry]


  14. P. Giambiagi, G. Schneider, and F.D. Valencia. On the expressiveness of infinite behavior and name scoping in process calculi. In FOSSACS, volume 2987 of LNCS, pages 226--240, 2004. [POSTSCRIPT ] Keyword(s): process calculi, expressivity, name scoping, infinite behavior. [bibtex-key = GSV04eib] [bibtex-entry]


  15. L. Langevine and M. Ducassé. A tracer driver to enable debugging, monitoring and visualization of CLP executions from a single tracer. In B. Demoen and V. Lifschitz, editors, Proceedings of the International Conference on Logic Programming, pages 462-463, September 2004. Lecture Notes in Computer Science 3132, Springer-Verlag. Note: Poster. [bibtex-key = langevine04c] [bibtex-entry]


  16. L. Langevine and M. Ducassé. Un pilote de traceur pour la PLC. Déboguer, auditer et visualiser une exécution avec un même traceur. In F. Mesnard, editor, Actes des Journées Francophones de Programmation en Logique avec Contraintes, pages 19-36, June 2004. HERMES Science Publications. [bibtex-key = langevine04b] [bibtex-entry]


  17. Matthieu Petit and Arnaud Gotlieb. An ongoing work on statistical structural testing via probabilistic concurrent constraint programming. In Proc. of SIVOES-MODEVA workshop, St Malo, France, November 2004. IEEE.
    Abstract:
    The use of a model to describe and test the expected behavior of a program is a well-proved software testing technique. Statistical structural testing aims at building a model from which an input probability distribution can be derived that maximizes the coverage of some structural criteria by a random test data generator. Our approach consists in converting statistical structural testing into a Probabilistic Concurrent Constraint Programming (PCCP) problem in order 1) to exploit the high declarativity of the probabilistic choice operators of this paradigm and 2) to benefit from its automated constraint solving capacity. This paper reports on an ongoing work to implement PCCP and exploit it to solve instances of statistical structural testing problems. Application to testing Java Card applets is discussed.
    [bibtex-key = PG04] [bibtex-entry]


  18. Matthieu Petit and Arnaud Gotlieb. Probabilistic choice operators as global constraints: application to statistical software testing. Poster presentation at ICLP'04, number 3132 of Springer LNCS, pages 471--472, 2004. [bibtex-key = PetitGotlieb:04:poster] [bibtex-entry]


  19. E. Tombini, H. Debar, L. Mé, and M. Ducassé. A serial combination of anomaly and misuse IDSes applied to HTTP traffic. In D. Thomsen and C. Schuba, editors, Proceedings of the Annual Computer Security Applications Conference, December 2004. [bibtex-key = tombini04] [bibtex-entry]


  20. The OADymPPaC Consortium. GENTRA4CP: Generic Trace Format for Constraint Programming. Deliverable of the RNTL project OADymPPaC, INRIA, July 2004. Note: http://contraintes.inria.fr/OADymPPaC. [bibtex-key = oadymppac04] [bibtex-entry]


  21. J.-P. Pouzol, S. Benferhat, H. Debar, M. Ducassé, E. Fayol, S. Gombault, J. Goubault-Larrecq, Y. Lavictoire, L. Mé, L. Noé, J. Olivain, E. Totel, and B. Vivinis. Rapport de synthèse sur la création de sondes de détection d'intrusions. Deliverable of the RNTL project DICO, July 2004. Note: 121 pages. [bibtex-key = dico-sp4-04] [bibtex-entry]


  22. Gerardo Schneider. A constraint-based algorithm for analysing memory usage on Java cards. Technical report RR-5440, INRIA, December 2004. [POSTSCRIPT ] Keyword(s): static analysis, memory analysis, constraint-based algorithm, Java card. [bibtex-key = schneider04cba] [bibtex-entry]


2003
  1. M. Ducassé, editor. Actes des Journées Francophones de Programmation en Logique avec Contraintes, June 2003. Revue des Sciences et Technologies de l'Information, Hors série/JFPLC 2003, HERMES Science Publications. [bibtex-key = ducasse03] [bibtex-entry]


  2. M. Huisman and T. Jensen, editors. J. Logic and Algebraic Programming. Special issue on Formal Methods for Smart Cards, 2003. Elsevier. Note: To appear. [bibtex-key = Huisman:03:JLAPSmartCards] [bibtex-entry]


  3. V. Viet Triem Tong. Automates d'arbres et réécriture pour l'étude de problèmes d'accessibilité. PhD thesis, Université Rennes 1, 2003. [bibtex-key = VietTriemTong-PhD03] [bibtex-entry]


  4. Rémi Douence, Pascal Fradet, and Mario Südholt. Trace-Based Aspects. In Mehmet Aksit et al., editor, Aspect-Oriented Software Development. Addison-Wesley, 2003. [bibtex-key = DSF-AW] [bibtex-entry]


  5. A. Banerjee and T. Jensen. Control-flow analysis with rank-2 intersection types. Mathematical Structures in Computer Science, 13(1):87--124, 2003. [bibtex-key = Banerjee:03:Rank2] [bibtex-entry]


  6. Rémi Douence and Pascal Fradet. The next 700 Krivine Machines. Higher-Order and Symbolic Computation, 2003. Note: To appear. [bibtex-key = DF-HOSC] [bibtex-entry]


  7. F. Spoto and T. Jensen. Class Analyses as Abstract Interpretations of Trace Semantics. ACM Transactions on Programming Languages and Systems (TOPLAS), 25(5):578--630, 2003. [bibtex-key = Spoto:ClassAnalysis:03] [bibtex-entry]


  8. In M. Bruynooghe, editor, Proc. Logic-based Program Synthesis and Transformation, 2003. Springer-Verlag, LNCS vol. 269???. [bibtex-key = lopstr2003] [bibtex-entry]


  9. Jean-Pierre Banâtre, Pascal Fradet, and Yann Radenac. Higher-order chemistry. In Preproceedings of the Workshop on Membrane Computing, July 2003. [bibtex-key = BFR03] [bibtex-entry]


  10. F. Besson and T. Jensen. Modular Class Analysis with DATALOG. In R. Cousot, editor, Proc. of 10th Static Analysis Symposium (SAS 2003), pages 19--36, 2003. Springer LNCS vol. 2694. [WWW ]
    Abstract:
    Datalog can be used to specify a variety of class analyses for object oriented programs as variations of a common framework. In this framework, the result of analyzing a class is a set of Datalog clauses whose least fixpoint is the information analysed for. Modular class analysis of program fragments is then expressed as the resolution of open Datalog programs. We provide a theory for the partial resolution of sets of open clauses and define a number of operators for reducing such open clauses.
    [bibtex-key = Besson:03:Modular] [bibtex-entry]
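
    Illustration (editor's sketch, OCaml; simple reachability clauses stand in for the class-analysis clauses of the paper): the least fixpoint of a set of Datalog clauses can be computed by naive bottom-up iteration until no new facts appear.

      (* path(X,Y) :- edge(X,Y).   path(X,Z) :- edge(X,Y), path(Y,Z). *)
      type fact = Edge of string * string | Path of string * string

      (* One round of immediate consequences of the two clauses. *)
      let consequences facts =
        List.concat_map
          (function
            | Edge (x, y) ->
                Path (x, y)
                :: List.filter_map
                     (function
                       | Path (y', z) when y' = y -> Some (Path (x, z))
                       | _ -> None)
                     facts
            | Path _ -> [])
          facts

      (* Iterate to the least fixpoint: stop when no new fact is derived. *)
      let rec lfp facts =
        let new_facts =
          List.filter (fun f -> not (List.mem f facts)) (consequences facts)
        in
        if new_facts = [] then facts else lfp (new_facts @ facts)

      let () =
        let db = lfp [Edge ("a", "b"); Edge ("b", "c")] in
        assert (List.mem (Path ("a", "c")) db);
        print_endline "least fixpoint contains path(a,c)"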


  11. David Cachera and Katell Morin-Allory. Verification of Control Properties in the Polyhedral Model. In Proc. 1st MEMOCODE conference, Mont-St-Michel, France, June 2003. [WWW ] [bibtex-key = CacMor03] [bibtex-entry]


  12. David Cachera and David Pichardie. Embedding of Systems of Affine Recurrence Equations in Coq. In Proc. TPHOLs 2003, 16th International Conference on Theorem Proving in Higher Order Logics, LNCS, Rome, Italy, September 2003. [WWW ] [bibtex-key = CacPic03] [bibtex-entry]


  13. P. Deransart, L. Langevine, and M. Ducassé. A Generic Trace Schema for the Portability of CP(FD) Debugging Tools. In Proceedings of the ERCIM workshop on Constraint and Logic Programming, 2003. [bibtex-key = deransart03] [bibtex-entry]


  14. M. Ducassé, L. Langevine, and P. Deransart. Rigorous design of tracers: an experiment for constraint logic programming. In M. Ronsse, editor, Proceedings of the Fifth International Workshop on Automated Debugging, September 2003. CoRR cs.SE/0309027. [bibtex-key = ducasse03b] [bibtex-entry]


  15. G. Feuillade and T. Genet. Reachability in conditional term rewriting systems. In FTP'2003, International Workshop on First-Order Theorem Proving, volume 86 n. 1 of Electronic Notes in Theoretical Computer Science, June 2003. Elsevier. [WWW ] Keyword(s): Term Rewriting, Conditional Term Rewriting, Reachability, Tree Automata. [bibtex-key = FeuilladeG-FTP03] [bibtex-entry]


  16. T. Genet, T. Jensen, V. Kodati, and D. Pichardie. A Java Card CAP Converter in PVS. In Proceedings of the 2nd International Workshop on Compiler Optimization Meets Compiler Verification (COCV 2003), 2003. ENTCS 82(2). [WWW ] Keyword(s): Java, Java Card, CAP Format, Compiler, Verification, Proof Assistant, PVS. [bibtex-key = GenetJKP-COCV03] [bibtex-entry]


  17. T. Genet, Y.-M. Tang-Talpin, and V. Viet Triem Tong. Verification of Copy Protection Cryptographic Protocol using Approximations of Term Rewriting Systems. In Proceedings of the Workshop on Issues in the Theory of Security, 2003. [WWW ] Keyword(s): Cryptographic Protocol, Verification, Term Rewriting, Reachability, Approximation, Timbuk. [bibtex-key = GenetTTVTT-wits03] [bibtex-entry]


  18. Arnaud Gotlieb. Exploiting Symmetries to Test Programs. In Proc. of 14th IEEE International Symposium on Software Reliability Engineering (ISSRE 2003), Denver, Colorado, USA, 2003. Note: 17th to 20th November. [bibtex-key = Gotlieb:03:SymTest] [bibtex-entry]


  19. A. Gotlieb and B. Botella. Automated Metamorphic Testing. In Proc. of the 27th IEEE Annual International Computer Software and Applications Conference (COMPSAC), Dallas, TX, USA, 2003. Note: 3th to 7th November. [bibtex-key = Gotlieb:03:MetaTest] [bibtex-entry]


  20. L. Langevine, M. Ducassé, and P. Deransart. A Propagation Tracer for Gnu-Prolog: from Formal Definition to Efficient Implementation. In C. Palamidessi, editor, Proceedings of the 19th Int. Conf. in Logic Programming, December 2003. Springer-Verlag, Lecture Notes in Computer Science. [bibtex-key = langevine03] [bibtex-entry]


  21. Lionel van Aertryck and Thomas Jensen. UML-CASTING: Test synthesis from UML models using constraint resolution. In J-M. Jézéquel, editor, Proc. Approches Formelles dans l'Assistance au Développement de Logiciels (AFADL'2003), 2003. INRIA. [bibtex-key = Aertryck:03:UMLCasting] [bibtex-entry]


  22. David Cachera and Katell Morin-Allory. Verification of Control Properties in the Polyhedral Model. Technical report 1515, INRIA, 2003. [WWW ] [bibtex-key = CacMorRR03] [bibtex-entry]


  23. David Cachera and David Pichardie. Proof Tactics for the Verification of Structured Systems of Affine Recurrence Equations. Technical report 1511, INRIA, 2003. [WWW ] [bibtex-key = CacPicRR03] [bibtex-entry]


  24. G. Feuillade, T. Genet, and V. Viet Triem Tong. Reachability Analysis over Term Rewriting Systems. Technical report RR-4970, INRIA, 2003. [WWW ] Keyword(s): Term Rewriting, Reachability, Tree Automata, Approximations, Verification, Timbuk, Abstract Interpretation. [bibtex-key = FeuilladeGVTT-RR03] [bibtex-entry]


  25. Jean Goubault-Larrecq, Stéphane Demri, Mireille Ducassé, Ludovic Mé, Julien Olivain, Claudine Picaronny, Jean-Philippe Pouzol, Eric Totel, and Bernard Vivinis. Algorithmes de détection et langages de signatures. Deliverable 3.3, RNTL project DICO, http://dico.netsecuresoftware.com//index.htm, October 2003. [bibtex-key = dico-sp3-03] [bibtex-entry]


  26. T. Genet. Timbuk 2.0 -- A Tree Automata Library -- Reference Manual and Tutorial. 2003. Note: 81 pages. http://www.irisa.fr/lande/genet/timbuk/. [WWW ] Keyword(s): Timbuk, Reference Manual, Tutorial, Term Rewriting, Reachability, Tree Automata, Approximations, Verification. [bibtex-key = Timbuk-manual] [bibtex-entry]


  27. P. Deransart, L. Langevine, and M. Ducassé. Debugging Constraint problems with Portable Tools. Demonstration presented at the 13th Workshop on Logic Programming Environments, December 2003. [bibtex-key = deransart03b] [bibtex-entry]


2002
  1. M. Ducassé, editor. Journal of Automated Software Engineering 9(1), special issue on automated debugging, January 2002. Kluwer. [bibtex-key = ducasse02] [bibtex-entry]


  2. F. Besson. Analyse modulaire de programmes. PhD thesis, Université de Rennes 1, 2002. [WWW ] [bibtex-key = Besson:02:PhD] [bibtex-entry]


  3. Siegfried Rouvrais. Utilisation d'agents mobiles pour la construction de services distribués. PhD thesis, Université de Rennes 1, July 2002. Note: Order number 2614. [bibtex-key = Rouvrais02a] [bibtex-entry]


  4. T. Jensen. Types in program analysis. In Torben Mogensen, David Schmidt, and I. Hal Sudborough, editors, The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones, Lecture Notes in Computer Science 2566, pages 204--222. Springer-Verlag, 2002. [bibtex-key = Jensen:02:Types] [bibtex-entry]


  5. E. Denney and T. Jensen. Correctness of Java Card method lookup via logical relations. Theoretical Computer Science, 283:305--331, 2002. [bibtex-key = Denney:02:Correctness] [bibtex-entry]


  6. E. Jahier and M. Ducassé. Generic Program Monitoring by Trace Analysis. Theory and Practice of Logic Programming, 2(4-5), September 2002. [bibtex-key = jahier02] [bibtex-entry]


  7. Siegfried Rouvrais. Construction de services distribués : une approche à base d'agents mobiles. RSTI - TSI (HERMES), 21(7):985--1007, 2002. [bibtex-key = Rouvrais02b] [bibtex-entry]


  8. F. Besson, Thomas de Grenier de Latour, and T. Jensen. Secure calling contexts for stack inspection. In Proc. of 4th Int Conf. on Principles and Practice of Declarative Programming (PPDP 2002), pages 76--87, 2002. ACM Press. [WWW ]
    Abstract:
    Stack inspection is a mechanism for programming secure applications by which a method can obtain information from the call stack about the code that (directly or indirectly) invoked it. This mechanism plays a fundamental role in the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. In this paper, we present a technique for inferring a secure calling context for a method. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. This is particularly useful for annotating library code in order to avoid having to re-analyse libraries for every new application. The technique is a constraint based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.
    [bibtex-key = Besson:02:CallingContexts] [bibtex-entry]


  9. P. Deransart, L. Langevine, and M. Ducassé. A Generic Trace Model for Finite Domain Solvers. In B. O'Sullivan, editor, Proc. of the International Workshop on User-Interaction in Constraint Satisfaction, September 2002. Cornell University. [bibtex-key = deransart02] [bibtex-entry]


  10. Rémi Douence, Pascal Fradet, and Mario Südholt. A framework for the detection and resolution of aspect interactions. In Proceedings of the ACM SIGPLAN/SIGSOFT Conference on Generative Programming and Component Engineering, 2002. Springer-Verlag, Lecture Notes in Computer Science 2487. [bibtex-key = DFS02a] [bibtex-entry]


  11. M. Ducassé and L. Langevine. Analyse automatisée de traces d'exécution de programmes CLP(FD). In M. Rueher, editor, Actes des Journées Francophones de Programmation en Logique avec Contraintes, pages 119-134, May 2002. HERMES science publications. [bibtex-key = ducasse02c] [bibtex-entry]


  12. M. Ducassé and L. Langevine. Automated analysis of CLP(FD) program execution traces. In P. Stuckey, editor, Proceedings of the International Conference on Logic Programming, July 2002. Lecture Notes in Computer Science 2401, Springer-Verlag. Note: Poster. Extended version available at http://www.irisa.fr/lande/ducasse/. [bibtex-key = ducasse02d] [bibtex-entry]


  13. Thomas Jensen, Florimond Ployette, and Olivier Ridoux. Iteration schemes for fixed point computation. In A. Ingolfsdottir and Z. Esik, editors, Proc. of 4th Int workshop on Fixed Points in Computer Science (FICS'02), pages 69--76, 2002. [bibtex-key = Jensen:02:Iteration] [bibtex-entry]


  14. L. Langevine, P. Deransart, M. Ducassé, and E. Jahier. Prototypage de traceurs CLP(FD). In M. Rueher, editor, Actes des Journées Francophones de Programmation en Logique avec Contraintes, pages 135-150, May 2002. HERMES science publications. [bibtex-key = langevine02] [bibtex-entry]


  15. B. Morin, L. Mé, H. Debar, and M. Ducassé. M2D2 : A Formal Data Model for IDS Alert Correlation. In A. Wespi et al., editor, Recent Advances in Intrusion Detection, pages 97-104, October 2002. Springer-Verlag, Lecture Notes in Computer Science 2516. [bibtex-key = morin02] [bibtex-entry]


  16. Jean-Philippe Pouzol and Mireille Ducassé. Formal specification of intrusion signatures and detection rules. In S. Schneider, editor, Proc. of 15th IEEE Computer Security Foundations Workshop, pages 64--76, 2002. IEEE Press. [bibtex-key = pouzol02] [bibtex-entry]


  17. Marc Éluard and Thomas Jensen. Secure object flow analysis for Java Card. In Proc. of 5th Smart Card Research and Advanced Application Conference (Cardis'02), 2002. IFIP/USENIX. [bibtex-key = Eluard:02:SecureObjectFlow] [bibtex-entry]


  18. Rémi Douence, Pascal Fradet, and Mario Südholt. Detection and resolution of aspect interactions. Research report 4435, INRIA, April 2002. [bibtex-key = DFS02b] [bibtex-entry]


  19. T. Genet and V. Viet Triem Tong. Proving Negative Conjectures on Equational Theories using Induction and Abstract Interpretation. Technical report RR-4576, INRIA, 2002. [WWW ] Keyword(s): Equational theories, proof by induction, abstract interpretation, rewriting. [bibtex-key = GenetVTTong-RR02] [bibtex-entry]


  20. Castor : Dossier de définition & dossier de justification (Tome 3), September 2002. [bibtex-key = Castor02] [bibtex-entry]


2001
  1. Isabelle Attali and Thomas Jensen, editors. Smart Card Programming and Security (e-Smart 2001), September 2001. Springer LNCS vol. 2140. [bibtex-key = Attali:01:EsmartProceedings] [bibtex-entry]


  2. Marc Éluard. Analyse de sécurité pour la certification d'applications Java Card. PhD thesis, Université de Rennes 1, December 2001. Note: Order number 2614. [bibtex-key = Eluard:01:PhD] [bibtex-entry]


  3. F. Besson, T. Jensen, D. Le Métayer, and T. Thorn. Model checking security properties of control flow graphs. Journal of Computer Security, 9:217--250, 2001. [WWW ]
    Abstract:
    A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a global security property. This article introduces a formalism based on a linear-time temporal logic for specifying global security properties pertaining to the control flow of the program, and illustrates its expressive power with a number of existing properties. We define a minimalistic, security-dedicated program model that only contains procedure call and run-time security checks and propose an automatic method for verifying that an implementation using local security checks satisfies a global security property. We then show how to instantiate the framework to the security architecture of Java 2 based on stack inspection and privileged method calls.
    [bibtex-key = Besson:01:ModelChecking] [bibtex-entry]


  4. M. Ducassé and E. Jahier. Efficient Automated Trace Analysis: Examples with Morphine. Electronic Notes in Theoretical Computer Science, 55(2), 2001. Note: K. Havelund and G. Rosu (Eds), proceedings of the first Workshop on Runtime Verification. [bibtex-key = ducasse01] [bibtex-entry]


  5. Olivier Ridoux and Patrice Boizumault. Typed Static Analysis: Application to the Groundness Analysis of Typed Prolog. Journal of Functional and Logic Programming, 2001(4), July 2001. [bibtex-key = rb-jflp2001] [bibtex-entry]


  6. T. Genet and Valérie Viet Triem Tong. Reachability Analysis of Term Rewriting Systems with Timbuk. In 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning, volume 2250 of Lecture Notes in Artificial Intelligence, pages 691--702, 2001. Springer Verlag. [WWW ] Keyword(s): Timbuk, Term Rewriting, Reachability, Tree Automata, Descendants, Verification.
    Abstract:
    We present Timbuk -- a tree automata library -- which implements usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non linear and non terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
    [bibtex-key = GenetVTTong-LPAR01] [bibtex-entry]


  7. E. Jahier, M. Ducassé, and O. Ridoux. Specifying Prolog Trace Models with a Continuation Semantics. In K.-K. Lau, editor, Logic Based Program Synthesis and Transformation, 2001. Springer-Verlag, Lecture Notes in Computer Science 2042. [bibtex-key = jahier01b] [bibtex-entry]


  8. T. Jensen and F. Spoto. Class analysis of object-oriented programs through abstract interpretation. In F. Honsell and M. Miculan, editors, Proc. of Foundations of Software Science and Computation Structures (FoSSaCS'01), pages 261--275, 2001. Springer LNCS vol. 2030. [bibtex-key = Jensen:01:ClassAnalysis] [bibtex-entry]


  9. L. Langevine, P. Deransart, M. Ducassé, and E. Jahier. Prototyping clp(fd) tracers: a trace model and an experimental validation environment. In T. Kusalik, editor, Proceedings of the 11th Workshop on Logic Programming Environments, 2001. Computing Research Repository, cs.PL/0111043. [bibtex-key = langevine01] [bibtex-entry]


  10. J.-P. Pouzol and M. Ducassé. From Declarative Signatures to Misuse IDS. In W. Lee, L. Mé, and A. Wespi, editors, Recent Advances in Intrusion Detection, Proceedings of the 4th International Symposium, pages 1-21, 2001. Springer-Verlag, Lecture Notes in Computer Science 2212. [bibtex-key = pouzol01] [bibtex-entry]


  11. Igor Siveroni, Thomas Jensen, and Marc Éluard. A Formal Specification of the Java Card Applet Firewall. In Hanne Riis Nielson, editor, Nordic Workshop on Secure IT-Systems, November 2001. [bibtex-key = Siveroni:01:Nordsec] [bibtex-entry]


  12. Marc Éluard, Thomas Jensen, and Ewen Denney. An Operational Semantics of the Java Card Firewall. In Isabelle Attali and Thomas Jensen, editors, Smart Card Programming and Security (e-Smart 2001), volume 2140 of Springer LNCS, September 2001. [bibtex-key = Eluard:01:Esmart] [bibtex-entry]


  13. Rémi Douence, Pascal Fradet, and Mario Südholt. A study of aspect interactions. Publication interne 1422, IRISA, 2001. [bibtex-key = DFS01] [bibtex-entry]


  14. M. Ducassé and L. Rozé. Revisiting the ``Traffic lights'' B case study. Publication Interne 1424, IRISA, November 2001. [bibtex-key = ducasse01b] [bibtex-entry]


  15. Thomas Genet and Valérie Viet Triem Tong. Reachability Analysis of Term Rewriting Systems with Timbuk (extended version). Technical Report RR-4266, INRIA, 2001. [WWW ]
    Abstract:
    We present Timbuk -- a tree automata library -- which implements usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non linear and non terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
    [bibtex-key = GenetVTTong-RR01] [bibtex-entry]


  16. Siegfried Rouvrais. Construction de services distribués: une approche à base d'agents mobiles. Publication interne 1421, IRISA, November 2001. [bibtex-key = rouvrais01] [bibtex-entry]


  17. Castor : Dossier de définition & dossier de justification (Tome 2), June 2001. [bibtex-key = Castor01] [bibtex-entry]


2000
  1. I. Attali and T. Jensen, editors. Proceedings of the International Workshop on Java Card (Java Card 2000), Cannes, France, September 2000. INRIA. [bibtex-key = Jensen:00:JavaCard] [bibtex-entry]


  2. M. Ducassé, editor. Proceedings of the 4th International Workshop on Automated Debugging (AADEBUG2000), Munich, August 2000. CoRR. Note: Refereed proceedings to appear in the Computing Research Repository (CoRR). [WWW ] [bibtex-key = ducasse2000c] [bibtex-entry]


  3. Pascal Fradet. Approches langages pour la conception et la mise en oeuvre de programmes. Habilitation thesis (habilitation à diriger des recherches), Université de Rennes 1, November 2000. [bibtex-key = HDR-PF] [bibtex-entry]


  4. E. Jahier. Analyse dynamique d'exécutions : construction automatisée d'analyseurs performants et spécifications de modèles d'exécution. PhD thesis, INSA de Rennes, December 2000. [bibtex-key = jahier00d] [bibtex-entry]


  5. Michaël Périn. Spécifications graphiques multi-vues : formalisation et vérification de cohérence. PhD thesis, IFSIC, October 2000. [bibtex-key = ThesePerin] [bibtex-entry]


  6. J.-P. Banâtre, P. Fradet, and D. Le Métayer. Gamma and the chemical reaction model: fifteen years after. In C.S. Calude, Gh. Paun, G. Rozenberg, and A. Salomaa, editors, Multiset Processing. Springer-Verlag, 2000. [bibtex-key = Gamma15] [bibtex-entry]


  7. M. Ducassé and J. Noyé. Tracing Prolog programs by source instrumentation is efficient enough. Elsevier Journal of Logic Programming, 43(2):157-172, May 2000. [WWW ] Keyword(s): Debugging, tracing, source to source transformation, benchmarking, Prolog.
    Abstract:
    Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is more portable, it benefits from many compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. We have compared the performance of two operational low-level Prolog tracers with source instrumentation. We have executed classical Prolog benchmark programs, collecting trace information without displaying it. On average, collecting trace information by program instrumentation is about as fast as using a low-level tracer in one case, and only twice as slow in the other. This is a minor penalty to pay, compared to the advantages of the approach. To our knowledge, this is the first time that a quantitative comparison of both approaches is made for any programming language.
    [bibtex-key = ducasse2000] [bibtex-entry]


  8. P. Fradet and J. Mallet. Compilation of a Specialized Functional Language for Massively Parallel Computers. Journal of Functional Programming, 2000. [bibtex-key = JFP00] [bibtex-entry]


  9. V. Abily and M. Ducassé. Benchmarking a distributed intrusion detection system based on ASAX: Preliminary results. In H. Debar, editor, RAID 2000 (Recent Advances on Intrusion Detection), 2000. Note: Refereed extended abstract. [WWW ] [bibtex-key = abily2000] [bibtex-entry]


  10. Thomas Colcombet and Pascal Fradet. Enforcing Trace Properties by Program Transformation. In Conference record of the 27th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2000. [WWW ]
    Abstract:
    We propose an automatic method to enforce trace properties on programs. The programmer specifies the property separately from the program; a program transformer takes the program and the property and automatically produces another ``equivalent'' program satisfying the property. This separation of concerns makes the program easier to develop and maintain. Our approach is both static and dynamic. It integrates static analyses in order to avoid useless transformations. On the other hand, it never rejects programs but adds dynamic checks when necessary. An important challenge is to make this dynamic enforcement as inexpensive as possible. The most obvious application domain is the enforcement of security policies. In particular, a potential use of the method is the securization of mobile code upon receipt.
    [bibtex-key = POPL00:Colcombet-Fradet] [bibtex-entry]
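
    Illustration (editor's sketch, OCaml; the property and operation names are invented for the example): the dynamic half of the approach is an inlined security automaton; the transformer would insert the event calls only at the program points where static analysis cannot prove the trace property.

      (* Security automaton for the trace property "no send after read". *)
      exception Violation of string

      type mode = Clean | HasRead
      let mode = ref Clean

      let event = function
        | `Read -> mode := HasRead
        | `Send -> if !mode = HasRead then raise (Violation "send after read")

      let read_file () = event `Read    (* instrumented operations: the *)
      let send () = event `Send         (* transformer inserts these calls *)

      let () =
        send ();                                   (* allowed: nothing read yet *)
        read_file ();
        try send (); assert false
        with Violation msg -> print_endline ("blocked: " ^ msg)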


  11. E. Denney and T. Jensen. Correctness of Java Card method lookup via logical relations. In Proc. of European Symp. on Programming (ESOP 2000), Lecture Notes in Computer Science, pages 104--118, 2000. Springer. [bibtex-key = Denney:00:Correctness] [bibtex-entry]


  12. M. Ducassé and L. Rozé. Proof obligations of the B formal method: Local proofs ensure global consistency. In A. Bossi, editor, LOgic-based Program Synthesis and TRansformation, pages 11-30, 2000. Springer-Verlag, Lecture Notes in Computer Science, 1817. [bibtex-key = ducasse2000b] [bibtex-entry]


  13. P. Fradet, V. Issarny, and S. Rouvrais. Analyzing non-functional properties of mobile agents. In Proc. of Fundamental Approaches to Software Engineering, FASE'00, Lecture Notes in Computer Science, March 2000. Springer-Verlag. [WWW ] Keyword(s): Mobile agents, RPC, performance, security.
    Abstract:
    The mobile agent technology is emerging as a useful new way of building large distributed systems. The advantages of mobile agents over the traditional client-server model are mainly non-functional. We believe that one of the reasons preventing the wide-spread use of mobile agents is that non-functional properties are not easily grasped by system designers. Selecting the right interactions to implement complex services is therefore a tricky task. In this paper, we tackle this problem by considering efficiency and security criteria. We propose a language-based framework for the specification and implementation of complex services built from interactions with primitive services. Mobile agents, RPC, remote evaluation, or any combination of these forms of interaction can be expressed in this framework. We show how to analyze (i.e. assess and compare) complex service implementations with respect to efficiency and security properties. This analysis provides guidelines to service designers, enabling them to systematically select and combine different types of protocols for the effective realization of interactions with primitive services.
    [bibtex-key = FASE00] [bibtex-entry]


  14. T. Genet and F. Klay. Rewriting for Cryptographic Protocol Verification. In Proceedings 17th International Conference on Automated Deduction, volume 1831 of Lecture Notes in Artificial Intelligence, 2000. Springer-Verlag. [WWW ] Keyword(s): Cryptographic Protocol, Verification, Term Rewriting, Reachability, Approximation, Timbuk.
    Abstract:
    On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring safeness of the approximation.
    [bibtex-key = GenetKlay-CADE00] [bibtex-entry]
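    Illustration:
    The following OCaml sketch is ours, not the paper's tree-automata construction; it only conveys the flavour of checking secrecy against an over-approximation. We close a finite set of observed messages under decomposition (projection and decryption with known keys) and check that the prohibited term lies outside the closure; all message and key names are invented.
      type msg =
        | Atom of string
        | Pair of msg * msg
        | Enc of msg * string          (* message encrypted under a key name *)

      module S = Set.Make (struct type t = msg let compare = compare end)

      (* One decomposition round: project pairs, decrypt under known keys. *)
      let decompose (known : S.t) : S.t =
        S.fold
          (fun m acc ->
             match m with
             | Pair (a, b) -> S.add a (S.add b acc)
             | Enc (m', k) when S.mem (Atom k) known -> S.add m' acc
             | _ -> acc)
          known known

      (* Terminates: only subterms of existing messages are ever added. *)
      let rec closure (known : S.t) : S.t =
        let next = decompose known in
        if S.equal next known then known else closure next

      let () =
        let observed =
          S.of_list [ Pair (Atom "nonce", Enc (Atom "secret", "kab")); Atom "kb" ]
        in
        (* Secrecy holds iff the prohibited set misses the approximation. *)
        if S.mem (Atom "secret") (closure observed)
        then print_endline "potential leak"
        else print_endline "secret unreachable in the approximation"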


  15. E. Jahier. Collecting graphical abstract views of Mercury program executions. In M. Ducassé, editor, Proceedings of the International Workshop on Automated Debugging (AADEBUG2000), Munich, August 2000. Note: Refereed proceedings to appear in the Computing Research Repository (CoRR). [WWW ] [bibtex-key = jahier00c] [bibtex-entry]


  16. E. Jahier, M. Ducassé, and O. Ridoux. Specifying Prolog Trace Models with a Continuation Semantics. In K.-K. Lau, editor, Proc. of LOgic-based Program Synthesis and TRansformation, London, July 2000. Note: Technical Report UMCS-00-6-1, Report Series, Department of Computer Science, University of Manchester, ISSN 1361-6161. [WWW ] [bibtex-key = jahier00] [bibtex-entry]


  17. E. Jahier, M. Ducassé, and O. Ridoux. Spécification de modèles de traces de programmes Prolog à l'aide d'une sémantique par continuation. In Touraivane, editor, Actes des Journées francophones de Programmation Logique et par Contraintes, Marseille, 2000. Hermès. [bibtex-key = jahier00b] [bibtex-entry]


  18. D. Le Métayer, V.-A. Nicolas, and O. Ridoux. Verification by testing for recursive program schemes. In A. Bossi, editor, LOgic-based Program Synthesis and TRansformation, 2000. Springer-Verlag, Lecture Notes in Computer Science, 1817. [bibtex-key = LOPSTR99] [bibtex-entry]


  19. J.-P. Pouzol and M. Ducassé. Handling Generic Intrusion Signatures is not Trivial. In H. Debar, editor, RAID 2000 (Recent Advances on Intrusion Detection), 2000. Note: Refereed extended abstract. [WWW ] [bibtex-key = pouzol2000] [bibtex-entry]


  20. Michaël Périn. Cohérence de spécifications multi-vues. In les actes de l'atelier Approches Formelles dans l'Assistance au Développement de Logiciels (AFADL'2000), Grenoble, France, January 2000. [bibtex-key = AFADL2000] [bibtex-entry]


  21. P. Fradet and J. Mallet. Compilation of a Specialized Functional Language for Massively Parallel Computers. Technical report 3894, INRIA, March 2000. [WWW ] Keyword(s): Skeletons, polytopes, data parallelism, cost analysis, program transformation.
    Abstract:
    We propose a parallel specialized language that ensures portable and cost-predictable implementations on parallel computers. The language is basically a first-order, recursion-less, strict functional language equipped with a collection of higher-order functions or skeletons. These skeletons apply on (nested) vectors and can be grouped in four classes: computation, reorganization, communication, and mask skeletons. The compilation process is described as a series of transformations and analyses leading to SPMD-like functional programs which can be directly translated into real parallel code. The language restrictions enforce a programming discipline whose benefit is to allow a static, symbolic, and accurate cost analysis. The parallel cost takes into account both load balancing and communications, and can be statically evaluated even when the actual size of vectors or the number of processors are unknown. It is used to automatically select the best data distribution among a set of standard distributions. Interestingly, this work can be seen as a cross fertilization between techniques developed within the FORTRAN parallelization, skeleton, and functional programming communities.
    [bibtex-key = RR3894] [bibtex-entry]


  22. T. Genet and F. Klay. Rewriting for Cryptographic Protocol Verification (extended version). Technical Report 3921, INRIA, 2000. [WWW ]
    Abstract:
    On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring safeness of the approximation.
    [bibtex-key = GenetK-CNET99] [bibtex-entry]


1999
  1. M. Ducassé, A. Kusalik, and G. Puebla, editors. Proceedings of the 10th Workshop on Logic Programming Environments, http://www.cs.usask.ca/projects/envlop/WLPE/10WLPE/, November 1999. [bibtex-key = ducasse99g] [bibtex-entry]


  2. Thomas Jensen. Analyse statique de programmes : fondements et applications. Habilitation à diriger des recherches, Université de Rennes 1, December 1999. [bibtex-key = Jensen:99:Hab] [bibtex-entry]


  3. S. Mallet. Explications dans les bases de données déductives : Associer trace et sémantique. PhD thesis, Insa de Rennes, Irisa, November 1999. [bibtex-key = smallet99d] [bibtex-entry]


  4. T. Thorn. Vérification de politiques de sécurité par analyse de programmes. PhD thesis, Université de Rennes I, Ifsic, Irisa, February 1999. [bibtex-key = TT99] [bibtex-entry]


  5. M. Ducassé. Abstract views of Prolog executions with Opium. In P. Brna, B. du Boulay, and H. Pain, editors, Learning to Build and Comprehend Complex Information Structures: Prolog as a Case Study, Cognitive Science and Technology. Ablex, 1999. [WWW ] Keyword(s): Software engineering, Programming environment, Automated debugging, Trace abstraction mechanisms, Debugging language, Program behavior understanding, Prolog Debugging tool.
    Abstract:
    Opium is a system for analysing and debugging Prolog programs. Its kernel comprises an execution tracer and a programming language with a set of primitives for trace and source analysis. In this chapter we show the power of Opium for supporting abstract views of Prolog executions. Abstract views give high-level points of view about executions. They filter out irrelevant details; they restructure the remaining information; and they compact it so that the amount of information given at each step has a reasonable size. The examples of abstract views given in the following are a goal execution profile, some data abstractions, an instantiation profile, a failure analysis, a loop analysis, and a kind of explanation for an expert system written in Prolog.
    [bibtex-key = ducasse99b] [bibtex-entry]


  6. C. Belleannée, P. Brisset, and O. Ridoux. A Pragmatic Reconstruction of $\lambda$Prolog. J. Logic Programming, 41(1):67--102, 1999. [bibtex-key = BEL99] [bibtex-entry]


  7. M. Ducassé. Opium: An extendable trace analyser for Prolog. The Journal of Logic Programming, 1999. Note: Special issue on Synthesis, Transformation and Analysis of Logic Programs, A. Bossi and Y. Deville (eds); also INRIA research report RR-3257 and IRISA internal publication PI-1127. [WWW ] Keyword(s): Software Engineering, Automated Debugging, Trace Query Language, Program Execution Analysis, Abstract Views of Program Executions, Prolog.
    Abstract:
    Traces of program executions are a helpful source of information for program debugging. They, however, give a picture of program executions at such a low level that users often have difficulty interpreting the information. Opium, our extendable trace analyzer, is connected to a ``standard'' Prolog tracer. Opium is programmable and extendable. It provides a trace query language and abstract views of executions. Users can therefore examine program executions at the levels of abstraction which suit them. Opium has proved capable of building abstract tracers and automated debugging facilities. This article describes in depth the trace query mechanism, from the model to its implementation. Characteristic examples are detailed. Extensions written so far on top of the trace query mechanism are listed. Two recent extensions are presented: the abstract tracers for the LO (Linear Objects) and the CHR (Constraint Handling Rules) languages. These two extensions were specified and implemented within a few days. They show how to use Opium for real applications.
    [bibtex-key = ducasse99] [bibtex-entry]


  8. V. Gouranton and D. Le Métayer. Dynamic slicing: a generic analysis based on a natural semantics format. Journal of Logic and Computation, 9(6):835-871, December 1999.
    Abstract:
    Slicing analyses have been proposed for different programming languages. Rather than defining a new analysis from scratch for each programming language, we would like to specify such an analysis once for all, in a language-independent way, and then specialise it for different programming languages. In order to achieve this goal, we propose a notion of natural semantics format and a dynamic slicing analysis format. The natural semantics format formalises a class of natural semantics and the analysis format is a generic, language-independent, slicing analysis. The correctness of the generic analysis is established as a relation between the derivation trees of the original program and the slice. This generic analysis is then instantiated to several programming languages conforming to the semantics format (an imperative language, a logic programming language and a functional language), yielding a dynamic slicing analyser for each of these languages.
    [bibtex-key = Gouranton:99:DynamicSlicing] [bibtex-entry]


  9. D. Mentré, D. Le Métayer, and T. Priol. Conception de protocoles de cohérence de MVP par traduction d'une spécification Gamma à l'aide d'aspects. Calculateurs Parallèles, 11(2), 1999. [bibtex-key = CP99] [bibtex-entry]


  10. D. Le Métayer. Foreword to the special issue on coordination and mobility. Theoretical Computer Science, 240(1), 1999. [bibtex-key = TCS99] [bibtex-entry]


  11. F. Besson, T. Jensen, and J.P. Talpin. Polyhedral Analysis for Synchronous Languages. In Gilberto Filé, editor, International Static Analysis Symposium, SAS'99, volume 1694 of Lecture Notes in Computer Science, September 1999. Springer-Verlag. Keyword(s): synchronous languages, abstract interpretation, reachability analysis, infinite state systems.
    Abstract:
    We define an operational semantics for the Signal language and design an analysis which makes it possible to verify properties pertaining to the relation between values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it is expressed and proved correct with respect to the source program rather than on an intermediate representation of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.
    [bibtex-key = SAS99:BJT] [bibtex-entry]
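    Illustration:
    The widening on convex polyhedra is technically involved; the OCaml sketch below shows the same acceleration idea on the far simpler domain of intervals (our simplification, not the paper's analysis): bounds that keep growing are pushed to infinity so that the fixpoint iteration terminates.
      type bound = NegInf | Fin of int | PosInf
      type itv = bound * bound               (* [lo, hi], assumed non-empty *)

      (* Widening: any unstable bound is extrapolated to infinity. *)
      let widen ((l1, h1) : itv) ((l2, h2) : itv) : itv =
        let lo = match l1, l2 with
          | Fin a, Fin b when b < a -> NegInf
          | _ -> l1
        and hi = match h1, h2 with
          | Fin a, Fin b when b > a -> PosInf
          | _ -> h1
        in
        (lo, hi)

      (* Fixpoint iteration for a loop body f, accelerated by widening. *)
      let rec lfp (f : itv -> itv) (x : itv) : itv =
        let x' = widen x (f x) in
        if x' = x then x else lfp f x'

      let () =
        (* x := 0; while ... do x := x + 1  yields  [0, +oo). *)
        let f (l, h) = (l, (match h with Fin n -> Fin (n + 1) | b -> b)) in
        match lfp f (Fin 0, Fin 0) with
        | Fin a, PosInf -> Printf.printf "[%d, +inf)\n" a
        | _ -> print_endline "other"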


  12. M. Ducassé. An introduction to the B formal method. In A.-L. Bossi, editor, Proceedings of the 9th International Workshop on LOgic-based Program Synthesis and TRansformation, pages 23--30, September 1999. Universita' Ca' Foscari di Venezia. Note: Technical report CS-99-16, Slides. [bibtex-key = ducasse99d] [bibtex-entry]


  13. M. Ducassé. Coca: An automated Debugger for C. In Proceedings of the 21st International Conference on Software Engineering, pages 504-513, May 1999. ACM Press. [WWW ] Keyword(s): Software engineering, Programming environment, Automated debugging, Trace query mechanism, Debugging language, Program behavior understanding, C Debugging tool.
    Abstract:
    We present Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics whereas source lines used by most debuggers do not have any. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data whereas debuggers usually search according to either control flow or data. As opposed to fully ``relational'' debuggers which use plain database querying mechanisms, Coca trace querying mechanism does not require any storage. The analysis is done on the fly, synchronously with the traced execution. Coca is therefore more powerful than ``source line'' debuggers and more efficient than relational debuggers.
    Annotation:
    Also RR-3489
    [bibtex-key = ducasse99c] [bibtex-entry]


  14. P. Fradet, D. Le Métayer, and M. Périn. Consistency checking for multiple view software architectures. In Proceedings of the joint 7th European Software Engineering Conference (ESEC) and 7th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-7), LNCS, Toulouse, France, September 1999. Springer-Verlag. [WWW ] Keyword(s): software architecture, multiple views, UML diagrams, formal semantics, families of graphs, static consistency verification.
    Abstract:
    Consistency is a major issue that must be properly addressed when considering multiple view architectures. In this paper, we provide a formal definition of views expressed graphically using diagrams with multiplicities and propose a simple algorithm to check the consistency of diagrams. We also put forward a simple language of constraints to express more precise (intra-view and inter-view) consistency requirements. We sketch a complete decision procedure to decide whether diagrams satisfy a given constraint expressed in this language. Our framework is illustrated with excerpts of a case study: the specification of the architecture of a train control system.
    [bibtex-key = Fradet:FSE99:ConsistencyChecking] [bibtex-entry]


  15. P. Fradet and M. Südholt. An aspect language for robust programming. In Workshop on Aspect-Oriented Programming, ECOOP 1999, July 1999. [WWW ] Keyword(s): aspect-oriented programming, robustness, exceptions, program transformation, program analysis.
    Abstract:
    In this position paper, we advocate the use of an aspect language for robust programming. AOP is particularly appealing for this task because robustness crosscuts traditional structuring means. Consider, for instance, the problem of ensuring that a global index remains in a given range. The code needed to check such an invariant is typically scattered all over the program. The paper presents an example-driven introduction of the proposed aspect language for program robustness; it then discusses its semantics and implementation and suggests extensions.
    [bibtex-key = AOP99] [bibtex-entry]


  16. E. Jahier and M. Ducassé. A generic approach to monitor program executions. In D. De Schreye, editor, Proceedings of the International Conference on Logic Programming, November 1999. MIT Press. [WWW ] Keyword(s): Monitoring, Trace analysis, Flexibility, Logic programming, Mercury.
    Abstract:
    Monitoring requires gathering data about executions. The monitoring functionalities currently available are built on top of ad hoc instrumentations. Most of them are implemented at low level; in any case they require an in-depth knowledge of the system to instrument. The best people to implement these instrumentations are generally the implementors of the compiler. They, however, cannot decide which data to gather. Indeed, hundreds of variants can be useful and only end-users know what they want. In this article, we propose a primitive which enables users to easily specify what to monitor. It is built on top of the tracer of the Mercury compiler. We illustrate how to use this primitive on two different kinds of monitoring. Firstly, we implement monitors that collect various kinds of statistics; each of them is well known, the novelty is that users can get exactly the variants they need. Secondly, we define two notions of test coverage for logic programs and show how to measure coverage rates with our primitive. To our knowledge, no definition of test coverage existed for logic programming so far. Each example is only a few lines of Mercury. Measurements show that the performance of the primitive on the above examples is acceptable for executions of several million trace events. Our primitive, although simple, lays the foundation for a generic and powerful monitoring environment.
    [bibtex-key = jahier99d] [bibtex-entry]
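    Illustration:
    A minimal OCaml sketch of the underlying idea (ours; the actual primitive is built on the Mercury tracer, and the event type below is invented): a monitor is a fold over the stream of trace events, so a user only writes an initial accumulator and an update function.
      type event = Call of string | Exit of string | Fail of string

      (* A monitor is an initial accumulator plus an update function. *)
      type 'a monitor = { init : 'a; update : 'a -> event -> 'a }

      let run_monitor (m : 'a monitor) (trace : event list) : 'a =
        List.fold_left m.update m.init trace

      (* Variant 1: count calls per predicate. *)
      let call_counts : (string * int) list monitor =
        { init = [];
          update = (fun acc e ->
            match e with
            | Call p ->
                let n = try List.assoc p acc with Not_found -> 0 in
                (p, n + 1) :: List.remove_assoc p acc
            | _ -> acc) }

      (* Variant 2: coverage as the set of predicates that ever succeeded. *)
      let coverage : string list monitor =
        { init = [];
          update = (fun acc e ->
            match e with
            | Exit p when not (List.mem p acc) -> p :: acc
            | _ -> acc) }

      let () =
        let t = [ Call "append"; Exit "append"; Call "rev"; Fail "rev" ] in
        List.iter (fun (p, n) -> Printf.printf "%s: %d calls\n" p n)
          (run_monitor call_counts t);
        print_endline ("covered: " ^ String.concat ", " (run_monitor coverage t))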


  17. E. Jahier and M. Ducassé. Un traceur d'exécutions de programmes ne sert pas qu'au débogage. In F. Fages, editor, Actes des Journées francophones de Programmation Logique et par Contraintes, Lyon, June 1999. Hermès. [WWW ] Keyword(s): Dynamic analysis, Trace analysis, Monitoring, Measure of test coverage, Logic programming, Mercury.
    Abstract:
    In this paper we show that a tracer with a trace analyser can be used to achieve more than debugging. We first illustrate how to compute coverage ratios for test cases. We also give four examples of monitoring the behavior of programs. Thus, instead of building ad hoc instrumentations, which is currently the case for such tools, one can use a uniform environment which allows a synergy between the tools to take place. As a matter of fact, while studying the test coverage measurement we enriched the trace information, to the benefit of the other tools. Moreover, ad hoc instrumentations require an in-depth knowledge of the system to instrument, either at low level or by source-to-source transformation. Even if it is not technically difficult, it always requires a significant programming effort. By contrast, with our approach, the instrumentation is generic: it is done once and for all, and the specific analyses can be relatively simple. The examples of this article each consist of fewer than a dozen lines of Prolog.
    [bibtex-key = jahier99b] [bibtex-entry]


  18. E. Jahier, M. Ducassé, and O. Ridoux. Specifying trace models with a continuation semantics. In M. Ducassé, A. Kusalik, L. Naish, and G. Puebla, editors, Proc. of ICLP'99 Workshop on Logic Programming Environments, 1999. Note: LPE'99. [WWW ] Keyword(s): trace models, continuation semantics, specification, validation, Logic programming.
    Abstract:
    In this article, we give a formal specification of Byrd's box model and we show how this specification can be extended to specify richer trace models. We also show how these specifications can be executed by a direct translation into lambda-Prolog, leading to a Prolog interpreter that produces execution traces. This interpreter can be used both to experiment with various trace models and to validate the different event specifications. Hence we have a formal framework to specify and prototype trace models.
    [bibtex-key = jahier99e] [bibtex-entry]


  19. T. Jensen, D. Le Métayer, and T. Thorn. Verification of control flow based security properties. In Proc. of the 20th IEEE Symp. on Security and Privacy, pages 89--103, May 1999. New York: IEEE Computer Society. [bibtex-key = SSP99] [bibtex-entry]


  20. S. Mallet and M. Ducassé. Generating deductive database explanations. In D. De Schreye, editor, Proceedings of the International Conference on Logic Programming, November 1999. MIT Press. [WWW ] Keyword(s): deductive databases, debugging, trace, operational semantics, multi-SLD-AL, meta-interpreter, substitution set, instrumentation.
    Abstract:
    Existing explanation systems for deductive databases show forests of proof trees. Although proof trees are often useful, they are only one possible interesting representation. We argue that an explanation system for deductive databases must be able to generate explanations at several levels of abstraction. One possible and well known technique to achieve this flexibility is to instrument meta-interpreters. It is, however, not often used because of its inefficiency. On the other hand, deductive databases often generate intermediate information stored in the physical database. This information can be considered as a low-level trace giving a faithful picture of what has happened at the relational level. The deductive reasoning is lost but can be very easily recovered by a meta-interpreter. In this article we describe a technique to generate explanations by integrating a relational trace and an instrumented meta-interpreter. The expensive aspects of meta-interpretation are reduced by the use of the trace which avoids many costly calculations. The flexibility of meta-interpretation is preserved, as illustrated by the generation of three different kinds of explanations: a box-oriented trace, a multi-SLD-AL tree and abstract AND trees. This technique enables powerful explanation systems to be implemented with very few modifications of the deductive database mechanism itself.
    [bibtex-key = sm-iclp99] [bibtex-entry]


  21. S. Mallet and M. Ducassé. Myrtle: A set-oriented meta-interpreter driven by a ``relational'' trace for deductive database debugging. In P. Flener, editor, LOgic-based Program Synthesis and TRansformation, pages 328-330, 1999. Springer-Verlag, LNCS 1559. Note: Extended abstract; full version available as INRIA research report RR-3598. [bibtex-key = smallet99b] [bibtex-entry]


  22. D. Mentré, D. Le Métayer, and T. Priol. Towards designing SVM coherence protocols using high-level specifications and aspect-oriented translations. In Proceedings of ICS'99 Workshop on Software Distributed Shared Memory, June 1999. [bibtex-key = WSDSM99] [bibtex-entry]


  23. D. Le Métayer, V.-A. Nicolas, and O. Ridoux. Verification by testing for recursive program schemes. In LOPSTR'99 (International Workshop on Logic Program Synthesis and Transformation), 1999. Springer-Verlag, LNCS. Note: To appear. [WWW ] Keyword(s): Software engineering, program verification, white-box testing, automated test data generation, program analysis, program schemes.
    Abstract:
    In this paper, we explore the testing-verification relationship with the objective of mechanizing the generation of test data. We consider program classes defined as recursive program schemes and we show that complete and finite test data sets can be associated with such classes, that is to say that these test data sets allow us to distinguish every two different functions in these schemes. This technique is applied to the verification of simple properties of programs.
    [bibtex-key = dlm-van-or-lopstr99] [bibtex-entry]


  24. Olivier Ridoux, Patrice Boizumault, and Frédéric Malésieux. Typed static analysis: application to groundness analysis of LambdaProlog and Prolog. In A. Middeldorp and T. Sato, editors, Proc. of the Int. Symp. on Functional and Logic Programming, 1999. Springer. [bibtex-key = RID99] [bibtex-entry]


  25. S. Mallet and M. Ducassé. Myrtle: A set-oriented meta-interpreter driven by a ``relational'' trace for deductive database debugging. Research Report RR-3598, INRIA, January 1999. [WWW ] Keyword(s): deductive databases, debugging, trace, multi-SLD-AL, meta-interpreter, substitution set, instrumentation.
    Abstract:
    Deductive databases manage large quantities of data and, in general, in a set-oriented way. The existing systems of explanation for deductive databases do not take these constraints into account. We propose a tracing technique which consists of integrating a "relational" trace and an instrumented meta-interpreter using substitution sets. The relational trace efficiently gives precise information about data extraction from the relational database. The meta-interpreter manages substitution sets and gives explanation on the deduction. The expensive aspects of meta-interpretation are reduced by the use of the trace which avoids many calculations. The flexibility of meta-interpretation is preserved. It allows different profiles of trace to be easily produced.
    [bibtex-key = sm-rr99] [bibtex-entry]


  26. M. Ducassé and E. Jahier. An automated debugger for Mercury - Opium-M 0.1 User and reference manuals. May 1999. Note: RT-231 INRIA (also PI-1234 IRISA). [bibtex-key = ducasse99e] [bibtex-entry]


  27. E. Jahier and M. Ducassé. Opium-M 0.1 User and Reference Manuals. IRISA, Rennes, March 1999. [WWW ] Keyword(s): Logic programming, Mercury, Trace analyser, Trace query language, Automated debugging, User manual, Reference manual.
    Abstract:
    This document gathers the user manual and the reference manual of Opium-M, an analyser of execution traces of Mercury programs. Opium-M is an adaptation to Mercury of Opium, a trace analyser for Prolog. Mercury is a new logic programming language. Its type, mode and determinism declarations enable code to be generated that is at the same time more efficient and more reliable than with current logic programming languages. The deterministic parts of Mercury programs are as efficient as their C counterparts. Moreover, numerous mistakes are detected at compilation time. However, our industrial partner's experience shows that the fewer the remaining mistakes, the harder they are to diagnose. A high-level debugging tool was thus necessary. Program execution traces given by traditional debuggers provide programmers with useful pieces of information. However, using them requires analysing huge amounts of information by hand. Opium-M is connected to the traditional tracer of Mercury; it allows execution trace analyses to be automated. It provides a relational trace query language based on Prolog which enables users to specify precisely what they want to see in the trace. Opium-M then automatically filters out information irrelevant to the users.
    [bibtex-key = jahier99] [bibtex-entry]


  28. E. Denney, P. Fradet, C. Goire, T. Jensen, and D. Le Métayer. Procédé de vérification de transformateurs de codes pour un système embarqué, notamment sur une carte à puce, July 1999. Note: French patent. [bibtex-key = brevet99] [bibtex-entry]


1998
  1. J. Mallet. Compilation d'un langage spécialisé pour machine massivement parallèle. PhD thesis, Université de Rennes I, Ifsic, Irisa, 1998. [WWW ] Keyword(s): parallelism, compilation, specialized language, program skeleton, data distribution, program transformation, cost analysis.
    Abstract:
    The programming languages used for parallel computers are either efficient languages with explicit parallelism, which are not portable and are very complex to use, or simple portable languages, whose compilation is complex and relatively inefficient. We propose a specialized language based on program skeletons encapsulating data and control flow, for which an accurate cost analysis of the parallel implementation exists. The compilation process deals with the automatic choice of the data distributions on the processors through the accurate cost guaranteed by the source language. This makes it possible to obtain an automatic compilation producing efficient parallel code (the distributions representing a global choice of parallel implementation). The compilation process is described as a series of program transformations, each transformation mapping one intermediate skeleton-based language into another. The target language is an SPMD-like skeleton-based language translating straightforwardly into a sequential language with calls to communication library routines. The main compilation steps are: the size analysis, the in-place updating transformation, the explicitation of the communications, and the data distribution choice.
    [bibtex-key = Mallet98c] [bibtex-entry]


  2. V.-A. Nicolas. Preuves de propriétés de classes de programmes par dérivation systématique de jeux de test. PhD thesis, Université de Rennes I, Ifsic, Irisa, December 1998. [WWW ] Keyword(s): Software engineering, program verification, white-box testing, automated test data generation, program analysis, program schemes.
    Abstract:
    The problem addressed in this thesis is the automatic generation of test data sets enabling the proof of program properties. We thus place ourselves half-way between the domain of software testing and that of program verification. The work on software testing has led to easy-to-use semi-automatic tools, but these rely on hypotheses that are difficult to verify in practice. In the domain of verification, some tools based on formal methods have been developed but they often require a user very experienced with the proof techniques used by the tool. This fact is due to undecidability problems generated by the Turing completeness of the formalisms treated. Our primary contribution is a new approach to program verification, combining the techniques of software testing and static analysis. We propose a formal method for the generation of finite test data sets, allowing one to prove that a program satisfies a given property. This method uses the program source and the property, which must belong to certain classes of programs (or properties). These classes are represented by hierarchies of program schemes, which can be seen as modeling some test hypotheses. Every program belonging to one of our schemes and passing the generated test data set satisfies the tested property. For a given property our method is completely automatic and thus does not require any special competence from the user. We have implemented this method in a prototype (for a restricted functional language), for properties expressible in terms of list length.
    [bibtex-key = Nicolas-these98] [bibtex-entry]


  3. R. Douence and P. Fradet. A systematic study of functional language implementations. ACM Transactions on Programming Languages and Systems, 20(2):344--387, 1998. [WWW ]
    Abstract:
    We introduce a unified framework to describe, relate, compare and classify functional language implementations. The compilation process is expressed as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. This approach also paves the way to formal comparisons by making it possible to estimate the complexity of individual transformations or compositions of them. Our study aims at covering the whole known design space of sequential functional language implementations. In particular, we consider call-by-value, call-by-name and call-by-need reduction strategies as well as environment and graph-based implementations. We describe for each compilation step the diverse alternatives as program transformations. In some cases, we illustrate how to compare or relate compilation techniques, express global optimizations or hybrid implementations. We also provide a classification of well-known abstract machines.
    [bibtex-key = TOPLAS98] [bibtex-entry]


  4. P. Fradet and D. Le Métayer. Structured Gamma. Science of Computer Programming, 31(2-3):263--289, 1998. [WWW ]
    Abstract:
    The Gamma language is based on the chemical reaction metaphor which has a number of benefits with respect to parallelism and program derivation. But the original definition of Gamma does not provide any facility for data structuring or for specifying particular control strategies. We address this issue by introducing a notion of {\em structured multiset} which is a set of addresses satisfying specific relations. The relations can be seen as a form of neighborhood between the molecules of the solution; they can be used in the reaction condition of a program or transformed by the action. A type is defined by a context-free graph grammar and a structured multiset belongs to a type $T$ if its underlying set of addresses satisfies the {invariant} expressed by the grammar defining $T$. We define a type checking algorithm that allows us to prove mechanically that a program maintains its data structure invariant. We illustrate the significance of the approach for program refinement and we describe its application to coordination.
    [bibtex-key = SCP98] [bibtex-entry]
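    Illustration:
    To fix intuitions about the chemical reaction metaphor (this sketches plain Gamma execution, not the Structured Gamma type system; the reaction is the classical maximum example), a Gamma program is a condition/action pair applied to a multiset until no pair of elements reacts. A naive sequential OCaml sketch:
      (* The classical example: x, y -> x whenever x >= y, computing the max. *)
      let condition x y = x >= y
      let action x _y = [ x ]

      (* Remove the first occurrence of y from a list. *)
      let rec remove_one y = function
        | [] -> []
        | z :: rest -> if z = y then rest else z :: remove_one y rest

      (* One step: find any unordered pair satisfying the condition. *)
      let step (ms : int list) : int list option =
        let rec pick = function
          | [] -> None
          | x :: rest ->
              (match List.find_opt (fun y -> condition x y || condition y x) rest with
               | Some y ->
                   let x', y' = if condition x y then (x, y) else (y, x) in
                   Some (action x' y' @ remove_one y rest)
               | None ->
                   (match pick rest with
                    | Some ms' -> Some (x :: ms')
                    | None -> None))
        in
        pick ms

      (* Each reaction shrinks the multiset, so execution terminates. *)
      let rec react ms = match step ms with Some ms' -> react ms' | None -> ms

      let () =
        match react [ 3; 1; 4; 1; 5 ] with
        | [ m ] -> Printf.printf "max = %d\n" m
        | _ -> ()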


  5. D. Le Métayer, V.-A. Nicolas, and O. Ridoux. Programs, Properties, and Data: Exploring the Software Development Trilogy. IEEE Software, 15(6):75-81, November/December 1998. [WWW ] Keyword(s): Software engineering, testing, verification, program analysis, program learning.
    Abstract:
    Software development usually involves a collection of properties, programs and data as input or output documents. Putting these three kinds of documents at the vertices of a triangle, one sees that all three sides of the triangle have been exploited in formal methods, and that they have often been used in both directions. However, richer combinations have seldom been envisaged, and formal methods often amount to a strict orientation of the figure by imposing functional dependencies (e.g., inferring test cases from specifications). Moreover, undecidability problems arise when properties are expressed in full predicate logic (or similar formalisms) or programs are written in Turing-equivalent programming languages. We advocate that (1) formal methods should provide more flexible ways to exploit the developer's knowledge and offer a variety of possibilities to construct programs, properties and test data and (2) it is worth restricting the power of logic formalisms and programming languages for the benefit of mechanization. We go one step in this direction, and present a formal method for generating test cases that combines techniques from abstract interpretation ({\em program -> property}) and testing ({\em program+property -> test data}), and takes inspiration from automated learning (test generation via a {\em testing bias}). The crucial property of the test suites generated this way is that they are robust with respect to a test objective formalized as a property. In other words, if a program passes the test suite, then it is guaranteed to satisfy the property. As this process leads to decision problems in very restricted formalisms, it can be fully mechanized.
    [bibtex-key = dlm-van-or-IEEEsoftware98] [bibtex-entry]


  6. M. Ducassé. Teaching B at a Technical University is Possible and Rewarding. In H. Habrias and S. E. Dunn, editors, B'98, Proceedings of the Educational Session, April 1998. Association de Pilotage des Conférences B, Nantes. Note: ISBN: 2-9512461-0-2. [WWW ] Keyword(s): B formal method, teaching.
    Abstract:
    In January 1994, to replace a highly unpopular denotational semantics course, I undertook to set up a course on the B method at the INSA of Rennes (Institut National des Sciences Appliquées), at a Bac+4 level. I had almost no previous knowledge of formal methods. I had, however, programmed much in Prolog and felt the need for a strong programming discipline, supported if possible by methods and tools. The experience is, in my opinion, successful. The students learn much during the course, find interesting placements where their competence is appreciated, and each run of the course teaches me something. In the article, I first list the reasons for starting the experience. I then discuss the pedagogical objectives of the course. The content of the course is given and an assessment is made.
    [bibtex-key = ducasse98c] [bibtex-entry]


  7. M. Ducassé and J. Noyé. Tracing Prolog Programs by Source Instrumentation is Efficient Enough. In K. Sagonas, editor, IJCSLP'98 Post-conference workshop on Implementation Technologies for Programming Languages based on Logic., June 1998. [WWW ] Keyword(s): Debugging, tracing, source to source transformation, benchmarking, Prolog.
    Abstract:
    Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is more portable, it benefits from many compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. We have compared the performance of two operational low-level Prolog tracers with source instrumentation. We have executed classical Prolog benchmark programs, collecting trace information without displaying it. On average, collecting trace information by program instrumentation is about as fast as using a low-level tracer in one case, and only twice slower in the other. This is a minor penalty to pay, compared to the advantages of the approach. To our knowledge, this is the first time that a quantitative comparison of both approaches is made for any programming language.
    [bibtex-key = ducasse98b] [bibtex-entry]


  8. P. Fradet and M. Südholt. Towards a Generic Framework for Aspect-Oriented Programming. In Third AOP Workshop, ECOOP'98 Workshop Reader, volume 1543 of LNCS, pages 394--397, July 1998. Springer-Verlag. [WWW ] Keyword(s): aspect-oriented programming, program transformation, program analysis.
    Abstract:
    What exactly are aspects? How to weave? What are the join points used to anchor aspects into the component program? Is there a general purpose aspect language? In this position paper, we address these questions for a particular class of aspects: aspects expressible as static, source-to-source program transformations. An aspect is defined as a collection of program transformations acting on the abstract syntax tree of the component program. We discuss the design of a generic framework to express these transformations as well as a generic weaver. The coupling of component and aspect definitions can be defined formally using operators matching subtrees of the component program. The aspect weaver is simply a fixpoint operator taking as parameters the component program and a set of program transformations. In many cases, program transformations based solely on syntactic criteria are not satisfactory and one would like to be able to use semantic criteria in aspect definitions. We show how this can be done using properties expressed on the semantics of the component program and implemented using static analysis techniques. One of our main concerns is to keep weaving predictable. This raises several questions about the semantics (termination, convergence) of weaving.
    [bibtex-key = AOP98] [bibtex-entry]


  9. V. Gouranton. Deriving analysers by folding/unfolding of natural semantics and a case study: slicing. In International Static Analysis Symposium, SAS'98, number 1503 of Lecture Notes in Computer Science, Pisa, Italy, pages 115-133, September 1998. Springer-Verlag. [WWW ] Keyword(s): systematic derivation, program transformation, natural semantics, proof tree, slicing analysis, logic programming language.
    Abstract:
    We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. A static analyser can then be derived by abstract interpretation of the dynamic analyser. We apply our framework to the derivation of a dynamic backward slicing analysis for a logic programming language.
    [bibtex-key = Gouranton:98:DerivingSAS] [bibtex-entry]


  10. T. Jensen. Inference of polymorphic and conditional strictness properties. In Proc. of 25th ACM Symposium on Principles of Programming Languages, pages 209--221, 1998. ACM Press. [WWW ]
    Abstract:
    We define an inference system for modular strictness analysis of functional programs by extending a conjunctive strictness logic with polymorphic and conditional properties. This extended set of properties is used to define a syntax-directed, polymorphic strictness analysis based on polymorphic recursion whose soundness is established via a translation from the polymorphic system into the conjunctive system. From the polymorphic analysis, an inference algorithm based on constraint resolution is derived and shown complete for a variant of the polymorphic analysis. The algorithm deduces at the same time a property and a set of hypotheses on the free variables of an expression, which makes it suitable for the analysis of programs with module structure.
    [bibtex-key = Jensen:98:Inference] [bibtex-entry]


  11. T. Jensen, D. Le Métayer, and T. Thorn. Security and Dynamic Class Loading in Java: A Formalisation. In Proceedings of the 1998 IEEE International Conference on Computer Languages, pages 4--15, May 1998. [WWW ]
    Abstract:
    We give a formal specification of the dynamic loading of classes in the Java Virtual Machine (JVM) and of the visibility of members of the loaded classes. This specification is obtained by identifying the part of the run-time state of the JVM that is relevant for dynamic loading and visibility and consists of a set of inference rules defining abstract operations for loading, linking and verification of classes. The formalisation of visibility includes an axiomatisation of the rules for membership of a class under inheritance, and of accessibility of a member in the presence of accessibility modifiers such as private and protected. The contribution of the formalisation is twofold. First, it provides a clear and concise description of the loading process and the rules for member visibility compared to the informal definitions of the Java language and the JVM. Second, it is sufficiently simple to allow calculations of the effects of load operations in the JVM.
    [bibtex-key = tjdlmtt-iccl98] [bibtex-entry]


  12. J. Mallet. Symbolic Cost Analysis and Automatic Data Distribution for a Skeleton-based Language. In Euro-Par'98 Parallel Processing, number 1470 of Lecture Notes in Computer Science, Southampton, UK, pages 688--697, September 1998. Springer-Verlag. [bibtex-key = Mallet98a] [bibtex-entry]


  13. S. Mallet and M. Ducassé. Pilotage d'un méta-interprète ensembliste par une trace ``relationnelle'' pour le débogage de bases de données déductives. In O. Ridoux, editor, Journées francophones de Programmation Logique et programmation par Contraintes, Nantes, pages 151-165, May 1998. JFPLC'98, Hermes. [WWW ] Keyword(s): deductive databases, debugging, trace, operational semantics, multi-SLD-AL, meta-interpreter, substitution sets, instrumentation.
    Abstract:
    The development of deductive databases requires tools, in particular for debugging. Deductive databases manage large quantities of data, generally in a set-oriented way. Existing explanation systems for deductive databases do not take these constraints into account. We propose a tracing technique which consists of integrating a ``relational'' trace with an instrumented meta-interpreter using substitution sets. The relational trace efficiently gives precise information about data extraction from the relational database. The set-oriented meta-interpreter manages substitution sets and gives explanations about the deduction. The expensive aspects of meta-interpretation are reduced by the use of the trace, which avoids many computations. The flexibility of meta-interpretation is preserved; it makes it easy to produce traces with different profiles.
    [bibtex-key = mallet98a] [bibtex-entry]


  14. M. Ducassé. Coca: A Debugger for C Based on Fine Grained Control Flow and Data Events. Technical report IRISA PI 1202 or INRIA RR-3489, INRIA, September 1998. [WWW ] Keyword(s): Software engineering, Programming environment, Automated debugging, Trace query mechanism, Debugging language, Program behavior understanding, C Debugging tool.
    Abstract:
    We present Coca, an automated debugger for C, where the breakpoint mechanism is based on events related to language constructs. Events have semantics whereas source lines used by most debuggers do not have any. A trace is a sequence of events. It can be seen as an ordered relation in a database. Users can specify precisely which events they want to see by specifying values for event attributes. At each event, visible variables can be queried. The trace query language is Prolog with a handful of primitives. The trace query mechanism searches through the execution traces using both control flow and data whereas debuggers usually search according to either control flow or data. As opposed to fully ``relational'' debuggers which use plain database querying mechanisms, Coca trace querying mechanism does not require any storage. The analysis is done on the fly, synchronously with the traced execution. Coca is therefore more powerful than ``source line'' debuggers and more efficient than relational debuggers.
    [bibtex-key = ducasse98] [bibtex-entry]


  15. V. Gouranton. Deriving analysers by folding/unfolding of natural semantics and a case study: slicing. Technical report 3413, INRIA, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE, April 1998. [WWW ] Keyword(s): systematic derivation, program transformation, natural semantics, proof tree, slicing analysis.
    Abstract:
    We consider specifications of analysers expressed as compositions of two functions: a semantic function, which returns a natural semantics derivation tree, and a property defined by recurrence on derivation trees. A recursive definition of a dynamic analyser can be obtained by fold/unfold program transformation combined with deforestation. We apply our framework to the derivation of a slicing analysis for a logic programming language.
    [bibtex-key = Gouranton:98:DerivingAnalysers] [bibtex-entry]


  16. V. Gouranton and D. Le Métayer. Dynamic slicing: a generic analysis based on a natural semantics format. Technical report 3375, INRIA, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE, March 1998. [WWW ] Keyword(s): dynamic slicing analysis, natural semantics, proof tree, correctness, systematic derivation.
    Abstract:
    Slicing analyses have been proposed for different programming languages. Rather than defining a new analysis from scratch for each programming language, we would like to specify such an analysis once for all, in a language-independent way, and then specialise it for different programming languages. In order to achieve this goal, we propose a notion of natural semantics format and a dynamic slicing analysis format. The natural semantics format formalises a class of natural semantics and the analysis format is a generic, language-independent, slicing analysis. The correctness of the generic analysis is established as a relation between the derivation trees of the original program and the slice. This generic analysis is then instantiated to several programming languages conforming to the semantics format (an imperative language, a logic programming language and a functional language), yielding a dynamic slicing analyser for each of these languages.
    [bibtex-key = Gouranton:98:DynamicSlicing] [bibtex-entry]


  17. T. Jensen, D. Le Métayer, and T. Thorn. Verification of control flow based security policies. Technical report 1210, IRISA, 1998. [WWW ] Keyword(s): security, verification, finite-state system, control flow, object orientation.
    Abstract:
    A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a given global security policy. We introduce a formalism based on a two-level linear-time temporal logic for specifying global security policies pertaining to the control flow of the program, and illustrate its expressive power with a number of existing policies. We define a minimalistic, security-dedicated program model that only contains procedure call and dynamic security checks and propose an automatic method for verifying that an implementation using local security constructs satisfies a global security policy. For a given formula in the temporal logic we prove that there exists a bound on the size of the states that have to be considered in order to assure the validity of the formula: this reduces the problem to finite-state model checking. Finally, we instantiate the framework to the security architecture proposed for Java (JDK 1.2).
    [bibtex-key = jensen:98:verification] [bibtex-entry]
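    Illustration:
    A toy OCaml sketch of the underlying question (ours, not the report's temporal-logic formalism; the program, procedure names and policy are invented): does every reachable call to a sensitive operation have a security check somewhere on the current call stack? For an acyclic call graph a direct exploration suffices; recursion is what forces the reduction to finite-state model checking developed in the report.
      type stmt = Call of string | Check | Sensitive

      (* A toy program: procedure bodies as statement lists (acyclic here). *)
      let body =
        [ "main",      [ Call "trusted"; Call "untrusted" ];
          "trusted",   [ Check; Call "worker" ];
          "untrusted", [ Call "worker" ];
          "worker",    [ Sensitive ] ]

      (* Explore all stacks; return a call path violating the policy, if any. *)
      let rec explore (checked : bool) (p : string) : string list option =
        let rec stmts checked = function
          | [] -> None
          | Check :: rest -> stmts true rest
          | Sensitive :: rest ->
              if checked then stmts checked rest else Some [ p ]
          | Call q :: rest ->
              (match explore checked q with
               | Some path -> Some (p :: path)
               | None -> stmts checked rest)
        in
        stmts checked (List.assoc p body)

      let () =
        match explore false "main" with
        | Some path -> print_endline ("violation via " ^ String.concat " -> " path)
        | None -> print_endline "policy holds"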


  18. J. Mallet. Compilation of a skeleton-based parallel language through symbolic cost analysis and automatic distribution. Technical report 3436, INRIA, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay Cedex FRANCE, May 1998. [WWW ] Keyword(s): skeleton-based language, parallelism, cost analysis, automatic data distribution.
    Abstract:
    We present a skeleton-based language which leads to portable and cost-predictable implementations on MIMD computers. The compilation process is described as a series of program transformations. In this paper we focus on the step concerning the distribution choice. The problem of automatically mapping input vectors onto processors is addressed using symbolic cost evaluation. Source language restrictions are crucial, since they make it possible to use powerful techniques for polytope volume computation to evaluate costs precisely. The approach can be seen as a cross-fertilization between techniques developed within the FORTRAN parallelization and skeleton communities.
    [bibtex-key = Mallet98b] [bibtex-entry]


1997
  1. V. Gouranton. Dérivation d'analyseurs dynamiques et statiques à partir de spécifications opérationnelles. PhD thesis, Université de Rennes I, Ifsic, Irisa, 1997. [WWW ] Keyword(s): dynamic analysis, static analysis, natural semantics, program transformation, abstract interpretation, liveness analysis, neededness analysis, program slicing.
    Abstract:
    We present a framework for designing dynamic and static analysers from operational specifications. We propose analyser specifications in two parts: the semantics of the programming language under consideration and the definition of the property of interest. We adopt natural semantics for the definition of the programming language. Such semantics have a double advantage: they are both compositional and intensional. Properties are defined as functions over the proof trees of the semantics. Those which are not specific to a given language can thus be defined once and for all and then specialised for particular languages. The composition of the semantics and the property defines an a posteriori dynamic analysis: a function which first computes the complete execution trace (proof tree) of a program and then extracts the property of interest from it. Program transformations then yield a self-contained recursive definition of the analyser. This function is in fact an on-the-fly dynamic analyser, in the sense that it computes the property of interest as the program executes. An abstraction of this analyser is then performed to obtain a static analyser, applying the theory of abstract interpretation. We illustrate our framework with specific analyses such as liveness analysis for an imperative language and neededness analysis for a higher-order functional language. We also show how an analysis of general interest, namely program slicing, can be defined on a semantics format and then instantiated for different languages.
    [bibtex-key = Gouranton:97:DerivationD] [bibtex-entry]


  2. C.L. Hankin, D. Le Métayer, and D. Sands. Refining multiset transformers. Theoretical Computer Science, 1997. [WWW ] Keyword(s): program transformation, composition operator, parallelism, Gamma, chemical reaction.
    Abstract:
    Gamma is a minimal language based on local multiset rewriting with an elegant chemical reaction metaphor. In this paper, we study a notion of refinement for Gamma programs involving parallel and sequential composition operators, and derive a number of programming laws. The calculus thus obtained is applied in the development of a generic ``pipelining'' transformation, which enables certain sequential compositions to be refined into parallel compositions.
    [bibtex-key = dlm-tcs97] [bibtex-entry]


  3. T. Jensen. Disjunctive Program Analysis for Algebraic Data Types. ACM Transactions on Programming Languages and Systems, 19(5):752--804, 1997. [WWW ]
    Abstract:
    We describe how binding-time, data-flow, and strictness analyses for languages with higher-order functions and algebraic data types can be obtained by instantiating a generic program logic and axiomatization of the properties analyzed for. A distinctive feature of the analyses is that disjunctions of program properties are represented exactly. This yields analyses of high precision and provides a logical characterization of abstract interpretations involving tensor products and uniform properties of recursive data structures. An effective method for proving properties of a program based on fixed-point iteration is obtained by grouping logically equivalent formulae of the same type into equivalence classes, obtaining a lattice of properties of that type, and then defining an abstract interpretation over these lattices. We demonstrate this in the case of strictness analysis by proving that the strictness abstract interpretation of a program is the equivalence class containing the strongest property provable of the program in the strictness logic.
    [bibtex-key = Jensen:97:Disjunctive] [bibtex-entry]


  4. D. Le Métayer. Program analysis for software engineering: new applications, new requirements, new tools. ACM SIGPLAN Notices, (1):86-88, January 1997. [WWW ] Keyword(s): program analysis, debugging, testing, correctness, scalability, interaction.
    Abstract:
    In order to play a bigger role in software engineering tools, static analysis techniques must take into account the specific needs of this application area, in particular in terms of interaction with the user and scalability. This new perspective requires a reexamination of the field of static program analysis both internally and in connection with related areas like theorem proving and debugging.
    [bibtex-key = dlm-sig97] [bibtex-entry]


  5. T. Thorn. Programming Languages for Mobile Code. ACM Computing Surveys, 29(3):213--239, September 1997. [WWW ] Keyword(s): mobile code, distribution, network programming, portability, safety, security, object orientation, formal methods, Java, Obliq, Limbo, Safe-Tcl, Objective Caml, Telescript.
    Abstract:
    Sun's announcement of the programming language Java more than anything popularized the notion of mobile code, that is, programs that travel on a heterogeneous network and execute automatically upon arrival at the destination. We describe several classes of mobile code and extract their common characteristics, among which security proves to be one of the major concerns. With these characteristics as reference points, we examine six representative languages proposed for mobile code. The conclusion of this study leads to our recommendations for future work, illustrated by examples of ongoing research.
    [bibtex-key = Thorn97] [bibtex-entry]


  6. P. Fradet and D. Le Métayer. Shape types. In Proc. of Principles of Programming Languages, Paris, France, Jan. 1997. ACM Press. [WWW ] Keyword(s): Pointer structures and manipulations, graph grammars, type checking, program robustness, C.
    Abstract:
    Type systems currently available for imperative languages are too weak to detect a significant class of programming errors. For example, they cannot express the property that a list is doubly-linked or circular. We propose a solution to this problem based on a notion of shape types defined as context-free graph grammars. We define graphs in set-theoretic terms, and graph modifications as multiset rewrite rules. These rules can be checked statically to ensure that they preserve the structure of the graph specified by the grammar. We provide a syntax for a smooth integration of shape types in C. The programmer can still express pointer manipulations with the expected constant time execution and benefits from the additional guarantee that the property specified by the shape type is an invariant of the program.
    [bibtex-key = popl97] [bibtex-entry]
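    Illustration:
    The shape types above specify store invariants as context-free graph grammars over a multiset of edges. The following sketch is ours (in Haskell, not the paper's C extension; the edge encoding and all names are invented for illustration): it checks, as a run-time predicate, the doubly-linked-list invariant that every next edge has a matching prev edge, the kind of property the paper instead verifies statically.

      import Data.List (sort)

      type Addr = Int
      -- a store abstracted as a multiset of labelled edges between addresses
      type Edge = (String, Addr, Addr)

      -- the doubly-linked invariant: 'next' and 'prev' are mutually inverse
      doublyLinked :: [Edge] -> Bool
      doublyLinked es = sort nexts == sort [ (b, a) | (a, b) <- prevs ]
        where nexts = [ (a, b) | ("next", a, b) <- es ]
              prevs = [ (a, b) | ("prev", a, b) <- es ]

      -- the store 1 <-> 2 <-> 3 satisfies the invariant
      demo :: Bool
      demo = doublyLinked [ ("next",1,2), ("prev",2,1)
                          , ("next",2,3), ("prev",3,2) ]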


  7. V. Gouranton. Un cadre générique de définition d'analyseurs dynamiques et statiques. In Journées du GDR Programmation, Rennes, November 1997. [WWW ] Keyword(s): dynamic analysis, static analysis, slicing, operational semantics.
    Abstract:
    In [Gouranton:97:DerivationD], we present a framework for designing dynamic and static analysers from operational specifications. In particular, we show how an analysis of general interest, namely program slicing, can be defined on a semantics format and then instantiated for different languages. In this article, we recall the main points of the generic framework for defining dynamic and static analysers presented in [Gouranton:97:DerivationD], and we show how the framework is applied to the slicing of an imperative language.
    [bibtex-key = vg_gdr97] [bibtex-entry]


  8. V. Gouranton and D. Le Métayer. Formal development of static program analysers. In The 8th Israel Conference on Computer Systems and Software Engineering, IEEE, IFCC, ISySe'97, Herzliya, Israel, June 1997. [WWW ] Keyword(s): functional languages, natural semantics, neededness analysis, paths analysis, program transformation, optimising compilers.
    Abstract:
    We propose an approach for the formal development of static analysers which is based on transformations of inference systems. The specification of an analyser is made of two components: an operational semantics of the programming language and the definition of a property by recurrence on the proof trees of the operational semantics. The derivation is a succession of specialisations of inference systems with respect to properties on their proof trees. In this paper, we illustrate the methodology with the derivation of analysers for a non strict functional language.
    [bibtex-key = vgdlm97] [bibtex-entry]


  9. A. A. Holzbacher, M. Périn, and M. Südholt. Modeling railway control systems using graph grammars: a case study. In 2nd International Conference on COORDINATION, LNCS 1282, September 1997. Springer Verlag. Note: Long version published as technical report, INRIA, no. 3210: see ftp://ftp.inria.fr/INRIA/publication/publi-ps-gz/RR/RR-3210.ps.gz. [WWW ] Keyword(s): software architecture, graph grammar, software evolution.
    Abstract:
    In this paper we develop a railway control system in three phases. We are mainly concerned with the software architecture of the control system and its dynamic evolution; we do not discuss here the implementation details of the components forming the control system. First, we informally discuss our design proposal for the architecture of the control system: a hierarchy of controllers whose leaves are local controllers connected in a network that mimics the underlying railway topology. Second, we formally define, by means of particular graph grammars, a style of software architectures for the railway control system consisting of two complementary views and ensuring several desirable properties by construction. The dynamic evolution of the architecture is modelled by a set of coordination rules which define graph transformations and are verified w.r.t. the graph grammar. Third, using a coordination rule as a formal specification of a dynamic modification of the railway control system, we derive its implementation in ConCoord, a programming environment for concurrent coordinated programming. With regard to software engineering, the first two phases belong to the system design while the third one forms the first implementation step.
    [bibtex-key = hps97] [bibtex-entry]


  10. S. Mallet and M. Ducassé. An Informal Presentation of DDB Trees: A Basis for Deductive Database Explanations. In Ulrich Geske, editor, DDLP'97, Fifth International Workshop on Deductive Databases and Logic Programming, July 1997. GMD-Studien. Keyword(s): debugging, explanations, deductive databases, logic programming. [bibtex-key = mallet97b] [bibtex-entry]


  11. S. Mallet and M. Ducassé. DDB trees: a basis for deductive database explanations. In Mariam Kamkar, editor, AADEBUG'97, Third International Workshop on Automated Debugging, Linköping, Sweden, pages 87-102, May 1997. [WWW ] Keyword(s): debugging, explanations, deductive databases, logic programming.
    Abstract:
    The power of deductive systems in general is that programs express what should be done and not how it should be done. Nevertheless, deductive systems need debugging and explanation facilities. Indeed, their operational semantics is less abstract than the declarative semantics of the programs. If users have to understand all the low-level details of the operational semantics, much of the benefit of using a deductive system is lost. Existing explanation systems for deductive databases produce proof trees to be shown to users. Although useful, proof trees give a fragmented view of query evaluations, and users face a forest of proof trees which is, most of the time, large. We propose a new data structure, called the DDB tree, which merges the information of a proof tree forest into one concise tree. A DDB tree gives a global picture of a query evaluation in a dramatically reduced structure with no loss of information. DDB trees can be shown to users or can be analyzed further by an explanation system.
    [bibtex-key = mallet97a] [bibtex-entry]


  12. P. Pepper and M. Südholt. Deriving Parallel Numerical Algorithms using Data Distribution Algebras: Wang's Algorithm. In Proc. of the 30th Hawaii International Conference on System Sciences, January 1997. IEEE. [WWW ] Keyword(s): functional, parallel programming, skeleton, data distribution algebra, partition algorithm.
    Abstract:
    Parallel and distributed programming are much more difficult than the development of sequential algorithms due to data distribution issues and communication requirements. This paper presents a methodology that enables the abstract description of the distribution of data structures by means of overlapping covers that form data distribution algebras. Algorithms are formulated and derived by transformation in a functional environment using skeletons, i.e. higher-order functions with specific parallel implementations. Communication is specified implicitly through the access to overlapping parts of covers. Such specifications enable the derivation of explicit lower-level communication statements. We illustrate the concepts by a complete derivation of Wang's partition algorithm for the solution of tridiagonal systems of linear equations.
    [bibtex-key = ps97a] [bibtex-entry]
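    Illustration:
    A skeleton, in the sense used above, is a higher-order function whose structure is fixed and whose efficient parallel implementation is supplied once and for all. A minimal sequential sketch of a divide-and-conquer skeleton (ours, in Haskell; the paper's skeletons additionally operate on covers of distributed data structures, which this does not model):

      -- the four parameters are the problem-specific parts; a parallel
      -- implementation would evaluate the two recursive calls in parallel
      dc :: (a -> Bool)       -- is the instance trivial?
         -> (a -> b)          -- solve a trivial instance directly
         -> (a -> (a, a))     -- split into independent sub-instances
         -> (b -> b -> b)     -- combine partial results
         -> a -> b
      dc trivial solve split combine = go
        where go x | trivial x = solve x
                   | otherwise = let (l, r) = split x
                                 in combine (go l) (go r)

      -- instantiation: summing a list by repeated halving
      sumDC :: [Int] -> Int
      sumDC = dc ((<= 1) . length) sum halves (+)
        where halves xs = splitAt (length xs `div` 2) xs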


  13. M. Südholt, C. Piepenbrock, K. Obermayer, and P. Pepper. Solving Large Systems of Differential Equations using Covers and Skeletons. In 50th Working Conference on Algorithmic Languages and Calculi, February 1997. Chapman & Hall. [WWW ] Keyword(s): functional, parallel programming, numerical algorithm, program transformation, skeleton, data distribution algebra.
    Abstract:
    The design and implementation of parallel algorithms for distributed memory architectures is much harder than the development of sequential algorithms. This is mainly due to the communication and synchronization that is necessary to manage distributed data correctly. This paper applies a methodology for the transformational derivation of parallel programs using data distribution algebras that enable an abstract description of data distribution issues. Algorithms are formulated using skeletons, that is, specialized higher-order functions with particular parallel implementations. The methodology is applied to the solution of a system of ordinary differential equations where convolutions can be computed using the Fast Fourier transformation. The example illustrates the practical optimization problems for a development model of the visual system that involves large-scale neural network simulations. Finally, this algorithm is compared to an implementation of the same system of equations in the programming language C* on a CM-5.
    [bibtex-key = spop97a] [bibtex-entry]


  14. L. van Aertryck, M. Benveniste, and D. Le Métayer. CASTING: A formally based software test generation method. In The 1st International Conference on Formal Engineering Methods, IEEE, ICFEM'97, Hiroshima, Japan, November 1997. [WWW ] Keyword(s): test cases, test suites, test generation, uniformity hypothesis, formal method, validation, functional testing, structural testing, attributed grammar, constraint solving.
    Abstract:
    In this paper, we present CASTING, a Computer Assisted Software Test engineering method. The method, supported by a prototype tool, generates realistic software test suites in a formal and semi-automatic way. Based on a two-layer modular architecture, CASTING is not tied to any particular style of input. Initially designed to engineer functional test suites from formal specifications, CASTING can easily be instantiated to accept other formal description levels, seamlessly supporting the main industrial testing techniques, ranging from functional testing to structural testing. A simple process scheduler specification is used in this paper to illustrate the interaction facilities of CASTING and to provide a sample of the resulting test suites.
    [bibtex-key = lvambdlm97] [bibtex-entry]


1996
  1. J.-M. Andréoli, C.L. Hankin, and D. Le Métayer. Coordination Programming: mechanisms, models and semantics. World Scientific Publishing, IC Press, 1996.
    Abstract:
    This book focuses on a class of coordination models where multiple pieces of software, possibly concerned with a wide range of domains and not necessarily built to work together, coordinate their activities through some shared dataspace and engage in interactions which are metaphorically analogous to "chemical" reactions. Exemplars of this class are Gamma, LO, Linda, and Shared Prolog. They have been extensively studied in the framework of the Esprit project BRA-9102 "Coordination", in which all the contributors of this book have been involved to some extent.
    [bibtex-key = ahm96] [bibtex-entry]


  2. G. Burn and D. Le Métayer. Proving the correctness of compiler optimisations based on a global analysis: a study of strictness analysis. Journal of Functional Programming, 6(1):75-109, 1996. [WWW ]
    Abstract:
    A substantial amount of work has been devoted to the proof of correctness of various program analyses but much less attention has been paid to the correctness of compiler optimisations based on these analyses. In this paper, we tackle the problem in the context of strictness analysis for lazy functional languages. We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler and it exposes the various optimisations made possible by a strictness analysis
    [bibtex-key = bm96a] [bibtex-entry]
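    Illustration:
    The optimisation whose correctness the paper establishes can be stated in one line: if a function is known (by strictness analysis) to be strict, its argument may be evaluated before the call without changing the result. A before/after sketch (ours, in Haskell; the paper itself expresses the optimisation with continuations inside a lazy functional compiler):

      f :: Int -> Int
      f x = x + 1                -- strict in x: f undefined = undefined

      callLazy, callOpt :: Int -> Int
      callLazy y = f (y * 2)     -- unoptimised: builds a thunk for y * 2
      callOpt  y = f $! (y * 2)  -- optimised: evaluate the argument eagerly,
                                 -- safe precisely because f is strict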


  3. J.-P. Banâtre and D. Le Métayer. Gamma and the chemical reaction model: ten years after. In Coordination programming: mechanisms, models and semantics, IC Press, 1996. World Scientific Publishing. [WWW ]
    Abstract:
    Gamma was originally proposed in 1986 as a formalism for the definition of programs without artificial sequentiality. The basic idea underlying the formalism is to describe computation as a form of chemical reaction on a collection of individual pieces of data. Due to the very minimal nature of the language, and its absence of sequential bias, it has been possible to exploit this initial paradigm in various directions. This paper reviews most of the work done by various groups along these lines and the current perspectives of our own research on Gamma. For the sake of clarity, we separate the contributions in three categories: (1) the relevance of the chemical reaction model for software engineering, (2) extensions of the original model and (3) implementation issues
    [bibtex-key = bm96b] [bibtex-entry]
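    Illustration:
    For readers unfamiliar with Gamma, the following interpreter sketch (ours, in Haskell; Gamma is a formalism, not this syntax) captures the chemical reaction idea: a reaction, given as a condition and an action on pairs of elements, is applied to a multiset until the solution is inert.

      -- a reaction: when may two elements react, and what do they become
      type Reaction a = (a -> a -> Bool, a -> a -> a)

      -- all ways of picking one element out of a multiset
      picks :: [a] -> [(a, [a])]
      picks []       = []
      picks (x : xs) = (x, xs) : [ (y, x : ys) | (y, ys) <- picks xs ]

      -- one step: replace some reacting pair by the action's result
      step :: Reaction a -> [a] -> Maybe [a]
      step (cond, act) m =
        case [ act x y : rest' | (x, rest)  <- picks m
                               , (y, rest') <- picks rest
                               , cond x y ] of
          []       -> Nothing
          (m' : _) -> Just m'

      -- iterate until no pair can react
      gamma :: Reaction a -> [a] -> [a]
      gamma r m = maybe m (gamma r) (step r m)

      -- example: the maximum of a non-empty multiset,
      -- by the reaction  x, y -> x  whenever x >= y
      maxElem :: Ord a => [a] -> a
      maxElem = head . gamma ((>=), \x _ -> x)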


  4. R. Douence and P. Fradet. Décrire et comparer les implantations de langages fonctionnels. In Journées francophones des langages applicatifs, Collection INRIA Didactique, Val-Morin, Québec, Canada, pages 183-203, Jan. 1996. [WWW ] Keyword(s): Compilation, optimizations, program transformation, CAM, Krivine Machine, Tabac.
    Abstract:
    We propose a formal framework for describing and comparing implementations of functional languages. We describe the compilation process as a sequence of program transformations within the functional framework. Fundamental implementation choices as well as optimizations are expressed naturally as different transformations. The advantages of this approach are to decompose and structure the compilation process, to simplify correctness proofs, and to allow formal comparisons by studying each transformation or their composition. We concentrate here on call-by-value and describe three different implementations: the CAM, Tabac and a strict version of the Krivine machine.
    [bibtex-key = jfla96] [bibtex-entry]


  5. M. Ducassé and J. Noyé. Tracing Prolog without a tracer. In N. Fuchs and U. Geske, editors, Proceedings of the poster session at JICSLP'96, pages 223-232, September 1996. GMD-Forschungszentrum Informationstechnik GMBH, GMD-STUDIEN Nr. 296, ISBN 3-88457-296-2. Note: One-page abstract also appears in Proc. of the JICSLP'96, MIT Press, ISBN 0-262-63173-3.
    Abstract:
    Tracing by automatic program source instrumentation has major advantages over compiled code instrumentation: it is cheaper to develop and more portable, it benefits from most compiler optimizations, it produces traces in terms of the original program, and it can be tailored to specific debugging needs. The usual argument in favor of compiled code instrumentation is its supposed efficiency. Tolmach and Appel [1] designed and implemented a tracer for Standard ML based on automatic program source instrumentation. The resulting code runs only 3 times slower than optimized code. They conjectured that a low-level tracer would run at about the same speed. However, they had no reasonable low-level tracer at hand to actually compare their results with. We have performed such a comparison in the context of Prolog, using the ECRC ECLiPSe environment. The built-in low-level tracer of ECLiPSe is, at present, one of the most interesting tracers for Prolog. We have compared it with an instrumentation based on O'Keefe's "advice" utility [2], made compatible with the ECLiPSe tracer. We traced "standard" Prolog benchmark programs [3] with both tracing techniques and measured the resulting CPU times. On average the performances of both implementations are equivalent: tracing Prolog programs by program instrumentation is no slower than using a low-level tracer. To our knowledge, this is the first time that a quantitative comparison of both approaches has been made. Another contribution is that our source instrumentation is more complete than O'Keefe's advice package. In particular, it deals with built-in predicates, and allows predicates to be skipped/unskipped. [1] A. Tolmach and A.W. Appel. A debugger for Standard ML. Journal of Functional Programming, 5(2):155-200, April 1995. [2] The "advice" utility is part of the DEC10 Prolog library, available by anonymous ftp from the AIAI of the University of Edinburgh (aiai.edinburgh.ac.uk). [3] P. Van Roy and A. M. Despain. High-performance logic programming with the Aquarius Prolog compiler. Computer, 25(1):54-68, January 1992.
    [bibtex-key = dn96] [bibtex-entry]
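    Illustration:
    Source instrumentation, as compared above, amounts to wrapping each traced predicate so that it emits trace events itself, with no support needed from the compiler or run-time system. A loose functional analogue (ours, in Haskell; the paper instruments Prolog via O'Keefe's advice utility):

      import Debug.Trace (trace)

      -- wrap a function so that each call reports its argument and result,
      -- in terms of the original program rather than of compiled code
      traced :: (Show a, Show b) => String -> (a -> b) -> a -> b
      traced name f x =
        let y = f x
        in trace (name ++ " " ++ show x ++ " = " ++ show y) y

      fib :: Int -> Int
      fib = traced "fib" fib'
        where fib' n = if n < 2 then n else fib (n - 1) + fib (n - 2)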


  6. P. Fradet, R. Gaugne, and D. Le Métayer. Static detection of pointer errors: an axiomatisation and a checking algorithm. In Proc. European Symposium on Programming, ESOP'96, volume 1058 of LNCS, Linköping, Sweden, pages 125-140, April 1996. Springer-Verlag. [WWW ] Keyword(s): Debugging tool, alias analysis, Hoare's logic.
    Abstract:
    The incorrect use of pointers is one of the most common sources of bugs. As a consequence, any kind of static code checking capable of detecting potential bugs at compile time is welcome. This paper presents a static analysis for the detection of incorrect accesses to memory (dereferences of invalid pointers). A pointer may be invalid because it has not been initialised or because it refers to a memory location which has been deallocated. The analyser is derived from an axiomatisation of alias and connectivity properties which is shown to be sound with respect to the natural semantics of the language. It deals with dynamically allocated data structures and it is accurate enough to handle circular structures.
    [bibtex-key = esop96] [bibtex-entry]


  7. P. Fradet and D. Le Métayer. Type checking for a multiset rewriting language. In Proc. of the LOMAPS workshop on Analysis and Verification of Multiple-agent Languages, volume 1192 of LNCS, pages 126-140, 1996. [WWW ] Keyword(s): multiset rewriting, graph grammars, type checking, invariant, verification.
    Abstract:
    We enhance Gamma, a multiset rewriting language, with a notion of structured multiset. A structured multiset is a set of addresses satisfying specific relations which can be used in the rewriting rules of the program. A type is defined by a context-free graph grammar and a structured multiset belongs to a type T if its underlying set of addresses satisfies the invariant expressed by the grammar defining T. We define a type checking algorithm which allows us to prove mechanically that a program maintains its data structure invariant
    [bibtex-key = fm96] [bibtex-entry]


  8. P. Louvet and O. Ridoux. Parametric Polymorphism for Typed Prolog and $\lambda$Prolog. In 8th Int. Symp. Programming Languages Implementation and Logic Programming, volume 1140 of LNCS, Aachen, Germany, pages 47--61, 1996. [WWW ] Keyword(s): Logic programming, typing, polymorphism, second-order lambda-calculus..
    Abstract:
    Typed Prolog and LambdaProlog are logic programming languages with a strict typing discipline which is based on simple types with variables. These variables are interpreted as denoting generic polymorphism. Experiments show that this discipline does not handle properly common logic programming practices used in Prolog. For instance, the usual transformation for computing the Clark completion of a Prolog program does not work well with some typed programs. We observe that the head-condition is at the heart of these problems, and conclude that it should be enforced. We propose a second-order scheme which is compatible with usual practices. In this scheme, type variables denote parametric polymorphism. It allows quantifying types and terms, passing type and term parameters to goals and terms, and expressing type guards for selecting goals. We give its syntax and deduction rules, and propose a solution to keep the concrete notation of programs close to the usual one.
    [bibtex-key = louvet:parametric:plilp:96] [bibtex-entry]


  9. D. Le Métayer. Software architecture styles as graph grammars. In Proc. of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 15-23, 1996. [WWW ] Keyword(s): coordination, graph rewriting, software architecture, static verification.
    Abstract:
    We present a formalism for the definition of software architectures in terms of graphs. Nodes represent the individual agents and edges define their interconnection. Individual agents can communicate only along the links specified by the architecture. The dynamic evolution of the overall architecture is defined independently by a coordinator. An architecture style is a class of architectures characterised by a graph grammar. The rules of the coordinator are statically checked to ensure that they preserve the constraints imposed by the architecture style
    [bibtex-key = met96] [bibtex-entry]


  10. D. Le Métayer and D. Schmidt. Structural Operational Semantics as a basis for static program analysis. In ACM Computing Surveys, pages 340-343, 1996. [WWW ]
    Abstract:
    In this paper, we argue that structural operational semantics (SOS) is an appropriate style for the definition of static analyses because it is both structural and intensional. Another significant quality of the SOS framework is that it can accommodate various classes of languages, and thus forms an ideal basis for the design of a program analysis platform.
    [bibtex-key = dlm9] [bibtex-entry]


  11. O. Ridoux. Engineering Transformations of Attributed Grammars in $\lambda$Prolog. In M. Maher, editor, Joint Int. Conf. and Symp. Logic Programming, pages 244--258, 1996. MIT Press. [WWW ] Keyword(s): Syntax-directed translation, grammar transformations, logic grammars, DCG, LambdaProlog..
    Abstract:
    An abstract representation for grammar rules that permits an easy implementation of several attributed grammar transformations is presented. It clearly separates the actions that contribute to evaluating attribute values from the circulation of these values, and it makes it easy to combine the representations of several rules in order to build the representation of new rules. This abstract form applies well to such transforms as elimination of left-recursion, elimination of empty derivation, unfolding and factorization. Finally, the technique is applied to DCGs and a LambdaProlog implementation of the abstract form and of the transforms is described.
    [bibtex-key = ridoux:engineering:jicslp:96] [bibtex-entry]


  12. S. Schoenig and M. Ducassé. A Backward Slicing Algorithm for Prolog. In R. Cousot and D.A. Schmidt, editors, Static Analysis Symposium, Aachen, pages 317-331, September 1996. Springer-Verlag, LNCS 1145. [WWW ]
    Abstract:
    Slicing is a program analysis technique originally developed by Weiser for imperative languages. Weiser showed that slicing is a natural tool for debugging, but it has numerous other applications (program integration, program optimization, etc.). In this article we describe a backward slicing algorithm for Prolog which produces executable slices. The proposed algorithm is applicable at least to pure Prolog extended by some simple built-in predicates that handle the explicit unification =/2 and arithmetic. To our knowledge, this algorithm is the first one to be proposed for Prolog. Because of the indeterminism and lack of explicit control flow of Prolog, existing algorithms cannot be trivially adapted. The two main contributions of this paper are a general definition of slicing adapted to Prolog and a slicing algorithm that produces executable programs.
    [bibtex-key = sm96] [bibtex-entry]


  13. S. Schoenig and M. Ducassé. Slicing pour programmes Prolog. In Actes des journées GDR programmation'96, Orléans, November 1996. Université de Bordeaux I. [WWW ]
    Abstract:
    Slicing is a program analysis technique originally developed by Weiser for imperative languages. Weiser showed that slicing is a natural debugging tool, but it also has numerous other applications (program integration, optimization, etc.). In this article, we propose a definition of slicing for Prolog and an algorithm. The algorithm is applicable at least to pure Prolog extended by some basic predicates (=/2 and arithmetic). To our knowledge, this algorithm is the first to be proposed for Prolog. The specificities of Prolog (indeterminism and lack of explicit control flow) mean that existing algorithms for imperative languages cannot be trivially adapted.
    [bibtex-key = sm96b] [bibtex-entry]


  14. R. Douence and P. Fradet. A taxonomy of functional language implementations. Part I: Call-by-Value. Technical report 2783, INRIA, Jan. 1996. [WWW ] Keyword(s): Compilation, optimizations, program transformation, lambda-calculus, combinators.
    Abstract:
    We present a unified framework to describe and compare functional language implementations. We express the compilation process as a succession of program transformations in the common framework. At each step, different transformations model fundamental choices or optimizations. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. It also paves the way to formal comparisons by estimating the complexity of individual transformations or compositions of them. We focus on call-by-value implementations, describe and compare the diverse alternatives and classify well-known abstract machines. This work also aims to open the design space of functional language implementations and we suggest how distinct choices could be mixed to yield efficient hybrid abstract machines.
    [bibtex-key = taxo-1] [bibtex-entry]
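    Illustration:
    One of the fundamental choices the report expresses as a program transformation is the compilation of the reduction strategy itself. A classical instance of such a pass is call-by-value CPS conversion, sketched below (ours, in Haskell; the report uses its own combinator-based intermediate language, and we assume "k", "f" and "a" are fresh variable names to keep the sketch short):

      data Term = Var String | Lam String Term | App Term Term
        deriving Show

      -- Fischer-style call-by-value CPS conversion: every term becomes a
      -- function of its continuation, making the evaluation order explicit
      cps :: Term -> Term
      cps (Var x)   = Lam "k" (App (Var "k") (Var x))
      cps (Lam x e) = Lam "k" (App (Var "k") (Lam x (cps e)))
      cps (App f a) =
        Lam "k" (App (cps f) (Lam "f"
                 (App (cps a) (Lam "a"
                  (App (App (Var "f") (Var "a")) (Var "k"))))))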


  15. R. Douence and P. Fradet. A taxonomy of functional language implementations. Part II: Call-by-Name, Call-by-Need and Graph Reduction. Technical report 3050, INRIA, Nov. 1996. [WWW ] Keyword(s): Compilation, optimizations, program transformation, lambda-calculus, combinators.
    Abstract:
    In Part I, we proposed an approach to formally describe and compare functional language implementations. We focused on call-by-value and described well-known compilers for strict languages. Here, we complete our exploration of the design space of implementations by studying call-by-name, call-by-need and graph reduction. We express the whole compilation process as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. We describe and compare the diverse alternatives for the compilation of the call-by-name strategy in both environment and graph-based models. The different options for the compilation of beta-reduction described in Part I can be applied here as well. In addition, we describe other possibilities specific to graph reduction. Call-by-need is nothing but call-by-name with redex sharing and update. We present how sharing can be expressed in our framework and we describe different update schemes. We finally classify some well-known call-by-need implementations.
    [bibtex-key = taxo-2] [bibtex-entry]


  16. P. Fradet, R. Gaugne, and D. Le Métayer. An inference algorithm for the static verification of pointer manipulation. Technical report 980, IRISA, 1996. [WWW ] Keyword(s): Debugging tool, alias analysis, Hoare's logic.
    Abstract:
    The incorrect use of pointers is one of the most common sources of bugs. As a consequence, any kind of static code checking capable of detecting potential bugs at compile time is welcome. This paper presents a static analysis for the detection of incorrect accesses to memory (dereferences of invalid pointers). A pointer may be invalid because it has not been initialised or because it refers to a memory location which has been deallocated. The analyser is derived from an axiomatisation of alias and connectivity properties which is shown to be sound with respect to the natural semantics of the language. It deals with dynamically allocated data structures and it is accurate enough to handle circular structures.
    [bibtex-key = pi980] [bibtex-entry]


  17. P. Fradet and D. Le Métayer. Structured Gamma. Technical report 989, IRISA, 1996. [WWW ] Keyword(s): multiset rewriting, type checking, invariant, verification, refinement.
    Abstract:
    The Gamma language is based on the chemical reaction metaphor which has a number of benefits with respect to parallelism and program derivation. But the original definition of Gamma does not provide any facility for data structuring or for specifying particular control strategies. We address this issue by introducing a notion of structured multiset which is a set of addresses satisfying specific relations and associated with a value. The relations can be seen as a form of neighbourhood between the molecules of the solution; they can be used in the reaction condition of the program or transformed by the action. A type is defined by a context-free graph grammar and a structured multiset belongs to a type T if its underlying set of addresses satisfies the invariant expressed by the grammar defining T. We define a type checking algorithm which allows us to prove mechanically that a program maintains its data structure invariant. We illustrate the significance of the approach for program reasoning and program refinement
    [bibtex-key = fm96b] [bibtex-entry]


1995
  1. M. Ducassé, editor. Proceedings of the 2nd International Workshop on Automated and Algorithmic Debugging, Saint Malo, France, May 1995. IRISA, Campus de Beaulieu, F-35042 Rennes cedex. Note: See http://www.irisa.fr/EXTERNE/manifestations/AADEBUG95/. [bibtex-key = duc95b] [bibtex-entry]


  2. C. Belleannée, P. Brisset, and O. Ridoux. Une reconstruction pragmatique de $\lambda$Prolog. Technique et science informatiques, 14:1131--1164, 1995. [WWW ] Keyword(s): Logic programming, LambdaProlog, lambda-calculus, quantifications, types.
    Abstract:
    LambdaProlog is a logic programming language in which clauses and terms are more general than in Prolog. One may wonder whether these extensions are simultaneously needed, and what are the useful subsets of LambdaProlog, at least for pedagogical purposes. We answer this question by exhibiting necessity links from the addition of the new term domain to the extension of the formula language. A handy heuristic for programming by induction on types can be derived from these links.
    [bibtex-key = belleannee:reconstruction:tsi:95] [bibtex-entry]


  3. S. Coupet-Grimal and O. Ridoux. On the use of Advanced Logic Programming Languages in Computational Linguistics. J. Logic Programming, 24(1&2):121--159, 1995. [WWW ] Keyword(s): Logic programming, computational linguistics, LambdaProlog, Prolog II, lambda-terms, rational terms.
    Abstract:
    Computational Linguistics and Logic Programming have strong connections, but the former uses concepts that are absent from the most familiar implementations of the latter. We advocate that a Logic Programming language need not feature the Computational Linguistics concepts exactly, it must only provide a logical way of dealing with them. We focus on the manipulation of higher-order terms and the logical handling of context, and we show that the advanced features of Prolog II and LambdaProlog are useful for dealing with these concepts. Higher-order terms are native in LambdaProlog, and Prolog II's infinite trees provide a handy data-structure for manipulating them. The formula language of LambdaProlog can be transposed in the Logic Grammar realm to allow for a logical handling of context.
    [bibtex-key = coupet:use:jlp:95] [bibtex-entry]


  4. C.L. Hankin and D. Le Métayer. Lazy types and program analysis. Science of Computer Programming, 1995.
    Abstract:
    Approaches to static analysis based on non-standard type systems have received considerable interest recently. Most work has concentrated on the relationship between such analyses and abstract interpretation. In this paper, we focus on the problem of producing efficient algorithms from such type-based analyses. The key idea is the notion of lazy types. We present the basic notions in the context of a higher-order strictness analysis of list-processing functions. We also sketch some recent work concerning a general framework for program analysis based on these ideas. We conclude with some experimental results
    [bibtex-key = hm95] [bibtex-entry]


  5. J.-P. Banâtre, C. Bryce, and D. Le Métayer. An Approach to information security in distributed systems. In Proceedings of the 5th IEEE International Workshop on Future Trends in Distributed Computing Systems, pages 384-394, 1995. [WWW ]
    Abstract:
    Information flow control mechanisms detect and prevent illegal transfers of information within a computer system. In this paper, we give an overview of a programming language based approach to information flow control in a system of communicating processes. The language chosen to present the approach is CSP. We give the security semantics of CSP and show, with the aid of examples, how the semantics can be used to conduct both manual and automated security proofs of application programs
    [bibtex-key = bbm95] [bibtex-entry]


  6. R. Douence and P. Fradet. Towards a Taxonomy of Functional Language Implementations. In Proc. of 7th Int. Symp. on Programming Languages: Implementations, Logics and Programs, volume 982 of LNCS, Utrecht, the Netherlands, pages 34-45, 1995. Springer-Verlag. [WWW ] Keyword(s): Compilation, optimizations, program transformation, lambda-calculus, combinators.
    Abstract:
    We express implementations of functional languages as a succession of program transformations in a common framework. At each step, different transformations model fundamental choices or optimizations. A benefit of this approach is to structure and decompose the implementation process. The correctness proofs can be tackled independently for each step and amount to proving program transformations in the functional world. It also paves the way to formal comparisons by estimating the complexity of individual transformations or compositions of them. We focus on call-by-value implementations, describe and compare the diverse alternatives and classify well-known abstract machines. This work also aims to open the design space of functional language implementations and we suggest how distinct choices could be mixed to yield efficient hybrid abstract machines.
    [bibtex-key = plilp95] [bibtex-entry]


  7. M. Ducassé. Automated Debugging Extensions of the Opium Trace Analyser. In Proceedings of the 2nd International Workshop on Automated and Algorithmic Debugging, Saint Malo, France, May 1995. IRISA, Campus de Beaulieu, F-35042 Rennes cedex. [WWW ]
    Abstract:
    Traces of program executions tell how programs behave in given cases. They are a helpful source of information for automated debugging. Opium is an automated trace analyser for Prolog programs. It is programmable and extendable. It provides a trace query language and abstract views of executions as a basis for automated debugging. Opium has shown its capabilities to build abstract tracers and automated debugging facilities. This paper lists the extensions written so far, and describes two recent extensions: the abstract tracers for the LO (Linear Objects) language and for the CHR (Constraint Handling Rules) language.
    [bibtex-key = duc95] [bibtex-entry]


  8. D. Le Métayer. Proving properties of programs defined over recursive data structures. In ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation, La Jolla, California, pages 88-99, 1995. ACM Press. [WWW ]
    Abstract:
    We present a technique for the mechanical proof of correctness properties of programs. We define a language of properties over recursive data structures and an inference system to assign properties to programs. In order to be able to design a tractable inference algorithm, we impose restrictions on both the programming language and the language of properties. We show that these restrictions do not preclude the treatment of interesting programs and properties. As an example, our analyser is able to prove automatically that a sort program returns a list of non-increasing values
    [bibtex-key = met95] [bibtex-entry]
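    Illustration:
    The property of the paper's closing example, stated as an executable predicate (ours, in Haskell): the analyser establishes it statically, in a restricted property language, rather than by testing.

      -- the post-condition proved for the sort program:
      -- the output is a list of non-increasing values
      nonIncreasing :: Ord a => [a] -> Bool
      nonIncreasing (x : y : r) = x >= y && nonIncreasing (y : r)
      nonIncreasing _           = True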


  9. O. Ridoux. Imagining CLP$(\Lambda,\equiv_{\alpha\beta})$. In A. Podelski, editor, Constraint Programming: Basics and Trends. Selected papers of the 22nd Spring School in Theoretical Computer Science. LNCS 910, Châtillon/Seine, France, pages 209--230, 1995. [WWW ] Keyword(s): CLP, LambdaProlog, lambda-calculus.
    Abstract:
    We study under which conditions the domain of lambda-terms and the equality theory of the lambda-calculus form the basis of a usable constraint logic programming language (CLP). The conditions are that the equality theory must contain axiom $\eta$, and the formula language must depart from Horn clauses and accept universal quantifications and implications in goals. In short, CLP-lambda must be close to LambdaProlog.
    [bibtex-key = ridoux:imagining:sctcs:95] [bibtex-entry]


  10. V. Gouranton. Une analyse de globalisation basée sur une semantique naturelle. Technical report, Journées du GDR Programmation, Grenoble, November 1995. [WWW ] Keyword(s): functional languages, operational semantics, globalisation analysis, compiler optimisation.
    Abstract:
    We define a globalisation analysis for a strict functional language by successive refinements of the natural semantics of the language. We obtain an instrumented trace semantics, then a generic inference system proved correct with respect to this semantics. This inference system must then be specialised for the property under consideration, defined by recurrence on traces.
    [bibtex-key = gouranton:95:uneanalyse] [bibtex-entry]


  11. V. Gouranton and D. Le Métayer. Derivation of static analysers of functional programs from path properties of a natural semantics. Technical report 2607, INRIA, 1995. [WWW ] Keyword(s): functional languages, operational semantics, neededness analysis, paths analysis, optimising compilers.
    Abstract:
    We advocate the use of operational semantics as a basis for specifying program analyses for functional languages. We put forward a methodology for defining a static analysis by successive refinements of the natural semantics of the language. We use paths as the abstract representation of proof trees and we provide a language for defining properties in terms of paths (neededness, absence, uniqueness,...) and the mechanical derivation of the corresponding analyses
    [bibtex-key = vm95] [bibtex-entry]


1994
  1. M. Ducassé and J. Noyé. Logic Programming Environments: Dynamic program analysis and debugging. The Journal of Logic Programming, 19/20:351-384, May/July 1994. Note: Anniversary issue: Ten years of Logic Programming. [WWW ]
    Abstract:
    Programming environments are essential for the acceptance of programming languages. This survey emphasizes that program analysis, both static and dynamic, is the central issue of programming environments. Because their clean semantics makes powerful analysis possible, logic programming languages have an indisputable asset in the long term. This survey is focused on logic program analysis and debugging. The large number of references provided shows that the field, though maybe scattered, is active. A unifying framework is given which separates environment tools into extraction, analysis, and visualization. It facilitates the analysis of existing tools and should give some guidelines to develop new ones. Achievements in logic programming are listed; some techniques developed for other languages are pointed out, and some trends for further research are drawn. Among the main achievements are algorithmic debugging, tracing for sequential Prolog, and abstract interpretation. The main missing techniques are slicing, test case generation, and program mutation. The perspectives we see are integration, evaluation and, above all, automated static and dynamic analysis.
    [bibtex-key = dn94] [bibtex-entry]


  2. J.-P. Banâtre, C. Bryce, and D. Le Métayer. Compile-time detection of information flow in sequential programs. In Proc. European Symposium on Research in Computer Security, volume 875 of LNCS, 1994. Springer Verlag. [WWW ] Keyword(s): formal verification, program analysis, verification tools, computer security, information flow.
    Abstract:
    We give a formal definition of the notion of information flow for a simple guarded command language. We propose an axiomatisation of security properties based on this notion of information flow and we prove its soundness with respect to the operational semantics of the language. We then identify the sources of non determinism in proofs and we derive in successive steps an inference algorithm which is both sound and complete with respect to the inference system
    [bibtex-key = bbm94] [bibtex-entry]


  3. P. Brisset and O. Ridoux. The Architecture of an Implementation of $\lambda$Prolog: Prolog/Mali. In ILPS'94 Workshop on Implementation Techniques for Logic Programming Languages, 1994. Note: Extended version of [bo92b]. [WWW ] Keyword(s): See [bo92b].
    Abstract:
    See [bo92b].
    [bibtex-key = brisset:architecture:ilpsw:94] [bibtex-entry]


  4. P. Fradet. Collecting More Garbage. In Proc. of ACM Conf. on Lisp and Functional Programming, Orlando, FL, USA, pages 24-33, June 1994. ACM Press. [WWW ] Keyword(s): Garbage collection, space leaks, typing, parametricity.
    Abstract:
    We present a method, adapted to polymorphically typed functional languages, to detect and collect more garbage than existing GCs. It can be applied to strict or lazy higher order languages and to several garbage collection schemes. Our GC exploits the information on utility of arguments provided by polymorphic types of functions. It is able to detect garbage that is still referenced from the stack and may collect useless parts of otherwise useful data structures. We show how to partially collect shared data structures and to extend the type system to infer more precise information. We also present how this technique can plug several common forms of space leaks.
    [bibtex-key = lisp94] [bibtex-entry]
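    Illustration:
    The information this GC exploits is parametricity: from a polymorphic type such as [a] -> Int it follows that the function can never inspect the list elements, so a type-aware collector may reclaim them even while the spine is reachable. A small Haskell witness of this "utility of arguments" idea (ours, not the paper's formal development):

      -- len is parametric in a: no element is ever demanded
      len :: [a] -> Int
      len []       = 0
      len (_ : xs) = 1 + len xs

      main :: IO ()
      main = print (len [undefined, undefined, undefined])  -- prints 3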


  5. P. Fradet. Compilation of Head and Strong Reduction. In Proc. of the 5th European Symposium on Programming, volume 788 of LNCS, Edinburgh, UK, pages 211-224, April 1994. Springer-Verlag. [WWW ] Keyword(s): lambda-calculus, continuations, strong reduction, head reduction, compilation.
    Abstract:
    Functional language compilers implement only weak-head reduction. However, there are cases where head normal forms or full normal forms are needed. Here, we study how to use cps conversion for the compilation of head and strong reductions. We apply cps expressions to a special continuation so that their head or strong normal form can be obtained by the usual weak-head reduction. We remain within the functional framework and no special abstract machine is needed. Used as a preliminary step, our method allows a standard compiler to evaluate under lambdas.
    [bibtex-key = esop94] [bibtex-entry]


  6. C.L. Hankin and D. Le Métayer. A type-based framework for program analysis. In Proc. Static Analysis Symposium, Springer Verlag, volume 864 of LNCS, pages 380-394, 1994. [WWW ]
    Abstract:
    In this paper, we present a general framework for type-based analyses of functional programs. Our framework is parameterised by a set of types to represent properties and interpretations for constants in the language. To construct a new analysis the user needs only to supply a model for the types (which properties they denote) and sound rules for the constants. We identify the local properties that must be proven to guarantee the correctness of a specific analysis and algorithm. We illustrate the approach by recasting Hunt and Sands' binding-time analysis in our framework. Furthermore, we report on experimental results suggesting that our generic inference algorithm can provide the basis for an efficient program analyser.
    [bibtex-key = hm94c] [bibtex-entry]


  7. C.L. Hankin and D. Le Métayer. Deriving algorithms from type inference systems: Application to strictness analysis. In Proc. ACM Symposium on Principles of Programming Languages, pages 202-212, 1994. [WWW ]
    Abstract:
    The role of non-standard type inference in static program analysis has been much studied recently. Early work emphasised the efficiency of type inference algorithms and paid little attention to the correctness of the inference system. Recently more powerful inference systems have been investigated but the connection with efficient inference algorithms has been obscured. The contribution of this paper is twofold: first we show how to transform a program logic into an algorithm and second, we introduce the notion of lazy types and show how to derive an efficient algorithm for strictness analysis
    [bibtex-key = hm94] [bibtex-entry]


  8. C.L. Hankin and D. Le Métayer. Lazy type inference for the strictness analysis of lists. In Proc. 5th European Symposium on Programming, Springer Verlag, volume 788 of LNCS, pages 211-224, 1994. [WWW ]
    Abstract:
    We present a type inference system for the strictness analysis of lists and we show that it can be used as the basis for an efficient algorithm. The algorithm is as accurate as the usual abstract interpretation technique. One distinctive advantage of this approach is that it is not necessary to impose an abstract domain of a particular depth prior to the analysis: the lazy type algorithm will instead explore the part of a potentially infinite domain required to prove the strictness property
    [bibtex-key = hm94b] [bibtex-entry]


  9. D. Le Métayer. Higher-order multiset programming. In Proc. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, volume 18, 1994. American Mathematical Society. [WWW ]
    Abstract:
    Gamma is a kernel language in which programs are described in terms of multiset transformations following the chemical reaction metaphor. Previous work has demonstrated its significance for the construction of massively parallel programs but also highlighted some limitations. We introduce a higher-order extension of this formalism unifying the program and data syntactic categories. As a consequence, active programs can be inserted into multisets. This generalisation has far reaching effects and we show its relevance from both the formal and the practical point of view. In particular, we present a program defining the chemical abstract machine in higher-order Gamma. We show that more sophisticated control strategies can be expressed within the language and we put them into practice by deriving a scan vector model program suitable for execution on fine-grained parallel machines
    [bibtex-key = met94] [bibtex-entry]


1993
  1. P. Brisset and O. Ridoux. Continuations in $\lambda$Prolog. In D.S. Warren, editor, 10th Int. Conf. Logic Programming, pages 27--43, 1993. MIT Press. [WWW ] Keyword(s): LambdaProlog, compilation, continuation, exception handling.
    Abstract:
    Continuations are well known in functional programming where they have been used to transform and compile programs. Some languages provide explicit manipulation of the continuation for the user: the user can catch and modify the current continuation. Continuations have also been used in the logic programming context to give a denotational semantics for Prolog, to generate Prolog compilers and to transform Prolog programs. In this paper, we propose to introduce new built-ins in a logic programming language to enable the user to explicitly replace the continuations. These built-ins allow the user to have a new control of the execution. We choose LambdaProlog because of its higher-order syntax and implications in the goals which are necessary for the definition and use of these built-ins. In order to define the built-ins, we extend to LambdaProlog the Prolog semantics based on continuations. Then, we show that an exception mechanism can be easily implemented using these new built-ins. The proposed semantics is also used to prove equivalence of goals changing the continuations.
    [bibtex-key = brisset:continuations:iclp:93] [bibtex-entry]
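    Illustration:
    The claim that an exception mechanism follows easily once continuations can be replaced is not specific to LambdaProlog; the sketch below (ours, in Haskell, not the paper's built-ins) shows the same construction in double-barrelled continuation-passing style, where catch simply installs a new failure continuation.

      -- a computation takes a success and a failure continuation
      type Comp r a = (a -> r) -> (String -> r) -> r

      ret :: a -> Comp r a
      ret x sk _ = sk x

      throw :: String -> Comp r a
      throw msg _ fk = fk msg

      -- catch replaces the current failure continuation by a handler
      catch :: Comp r a -> (String -> Comp r a) -> Comp r a
      catch m h sk fk = m sk (\msg -> h msg sk fk)

      -- example: a thrown exception is turned back into a result
      demo :: String
      demo = catch (throw "boom") (\msg -> ret ("caught " ++ msg)) id error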


  2. M. Ducassé. A pragmatic survey of automated debugging. In P. Fritzson, editor, Proceedings of the First Workshop on Automated and Algorithmic Debugging, volume 749 of Lecture Notes in Computer Science, Linköping, Sweden, May 1993. Springer-Verlag. [WWW ]
    Abstract:
    This article proposes a structuring view of the area of automated debugging. Nineteen automated debugging systems are analyzed. Thirteen existing automated debugging techniques are briefly evaluated from a pragmatic point of view. The three underlying strategies are identified, namely verification with respect to specification, checking with respect to language knowledge and filtering with respect to symptom. The verification strategy compares the actual program with some formal specification of the intended program. The checking strategy looks for suspect places which do not comply with some explicit knowledge of the programming language. The filtering strategy assumes correct parts of the code which cannot be responsible for the error symptom. Assertion evaluation and algorithmic debugging are the most promising verification techniques. Some intrinsic limitations of the checking strategy makes it only a complementary, though helpful, debugging support. The slicing technique should be included in any debugger.
    [bibtex-key = duc93b] [bibtex-entry]


  3. S. Le Huitouze, P. Louvet, and O. Ridoux. Les grammaires logiques et $\lambda$Prolog. In Journées Francophones sur la Programmation en Logique, Nîmes, France, pages 93--108, 1993. Teknea. Note: French version of [llr93]. [WWW ] Keyword(s): LambdaProlog, logic grammars, scope, context representation.
    Abstract:
    Most Prolog systems provide a logic grammar formalism called DCG (Definite Clause Grammar), whose usefulness is well established. We present two new logic grammar formalisms, called DCG' and lambda-HHG (higher-order Hereditary Harrop Grammars), intended to be used in LambdaProlog systems. The relations between DCG, DCG' and lambda-HHG on the one hand, and between Prolog and LambdaProlog on the other hand, can be summarized as follows. (1) Prolog, DCG and the translation of DCG into Prolog are classical. (2) Miller proposes the evolution from Prolog to LambdaProlog, and Pereira, Pareschi and Miller show the interest of using LambdaProlog for natural language processing. (3) We propose a strongly typed variant of DCG (called DCG') so that it can be translated into LambdaProlog within a LambdaProlog system. This is a first step towards a more elaborate formalism. (4) lambda-HHG is to DCG what LambdaProlog is to Prolog. This formalism combines the advantages of being grammatical and of hiding the operations of a parser (like DCG), and of having higher-order terms as attributes and offering a logical approach to context representation (like LambdaProlog).
    [bibtex-key = lehuitouze:grammaires:jfpl:93] [bibtex-entry]


  4. S. Le Huitouze, P. Louvet, and O. Ridoux. Logic Grammars and $\lambda$Prolog. In D.S. Warren, editor, 10th Int. Conf. Logic Programming, pages 64--79, 1993. MIT Press. [WWW ] Keyword(s): LambdaProlog, logic grammars, scope, context handling in syntactic analysis.
    Abstract:
    A logic grammar formalism called DCG (Definite Clause Grammars), which has proved to be useful, is part of most Prolog implementations. We develop two new logic grammar formalisms called DCG' and lambda-HHG (higher-order Hereditary Harrop Grammars) that can be used in LambdaProlog implementations. The relations between DCG, DCG', and lambda-HHG, and Prolog and LambdaProlog can be summarized as follows: (1) The language Prolog, the DCG formalism, and the translation of DCG into Prolog by Prolog are classical. (2) The evolution from Prolog to LambdaProlog is due to Miller and the advantage of using LambdaProlog for doing natural language analysis is shown by Pereira, and Pareschi and Miller. (3) We propose a strongly typed variant of DCG (called DCG') for its translation into LambdaProlog by LambdaProlog. It is a first stage towards a more elaborate formalism. (4) A formalism that is to DCG what LambdaProlog is to Prolog is still missing, and also the way to translate it into LambdaProlog. Such a formalism combines the advantage of being grammatical and hiding the house-keeping operations (like DCG) and of having higher-order terms as attributes and providing a logical approach to context (like LambdaProlog). lambda-HHG is such a formalism.
    [bibtex-key = llr93] [bibtex-entry]
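    Illustration:
    The "house-keeping operations" that DCGs hide are the threading of the input stream through the grammar rules; the standard translation turns each nonterminal into a relation between an input and its leftover. The same plumbing, made explicit in a functional setting (our sketch, in Haskell, not the paper's formalisms):

      -- each nonterminal denotes a relation between an input stream and
      -- what remains of it, exactly what the DCG expansion threads as S0, S
      type Parser tok a = [tok] -> [(a, [tok])]

      -- a terminal symbol consumes one matching token
      sym :: Eq tok => tok -> Parser tok tok
      sym t (x : xs) | x == t = [(t, xs)]
      sym _ _                 = []

      -- sequencing threads the stream from one rule body item to the next
      andThen :: Parser tok a -> Parser tok b -> Parser tok (a, b)
      andThen p q s0 = [ ((a, b), s2) | (a, s1) <- p s0, (b, s2) <- q s1 ]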


  5. P. Brisset and O. Ridoux. The Compilation of $\lambda$Prolog and its execution with MALI. Rapport de recherche 1831, INRIA, 1993. [WWW ] Keyword(s): See [bo92a].
    Abstract:
    See [bo92a].
    [bibtex-key = brisset:compilation:inria:93] [bibtex-entry]


1992
  1. M. Ducassé, Y.-J. Lin, and L.Ü. Yalcinalp, editors. Proceedings of IJCSLP'92 Workshop on Logic Programming Environments, November 1992. Note: Technical Report TR 92-143, Case Western Reserve University, Cleveland. [bibtex-key = dly92] [bibtex-entry]


  2. M. Ducassé. An extendable trace analyser to support automated debugging. PhD thesis, University of Rennes I, France, June 1992. Note: European Doctorate.
    Abstract:
    The dissertation describes the innovative features of Opium, a high-level debugging environment for Prolog, designed and implemented at ECRC between 1985 and 1991. Debugging is a costly process, and automating it would significantly reduce the cost of software production and maintenance. However, it is unrealistic to aim at fully automating the task. In particular, programmers have to understand rapidly changing situations, examining large amounts of data. In the current state of the art it is beyond the capabilities of computers to take the place of the programmer's understanding. Nevertheless, computers can significantly help programmers to select the data to be analysed. The data used by program analysis in general is often restricted to the source code of the analysed programs. However, there is a complementary source of information, namely traces of program executions. An execution trace contains less general information than the program source, but it tells how the program behaves in a particular case. Furthermore, there are intrinsically dynamic aspects in a program which are best analysed at execution time, for example uses of read/write. These remarks suggested building the automated debugging functionalities of Opium on top of an existing tracer, extending it to a general trace and source analyser. The most important features of Opium, not to be found in other debuggers, are as follows. - It provides a trace query language which is a solution to the ever-growing command sets of other tracers. With two primitives plus Prolog, users can already specify more precise trace queries than with the hard-coded commands of other tracers. - Opium is programmable and extendable. It is thus an environment where debugging strategies can be easily programmed and integrated. Some strategies are already implemented. - Abstract views of executions are proposed as a basis for automated debugging. They help users to understand the behaviours of programs by browsing through executions at a higher level than single steppers. Opium is fully implemented. More than 20 academic sites have recently requested the Opium prototype, and some are actually implementing new abstract views.
    [bibtex-key = duc92] [bibtex-entry]


  3. Y. Bekkers, O. Ridoux, and L. Ungaro. Dynamic Memory Management for Sequential Logic Programming Languages. In Y. Bekkers and J. Cohen, editors, Int. Workshop on Memory Management, volume 637 of LNCS, pages 82--102, 1992. Springer-Verlag. [WWW ] Keyword(s): Memory management, logic programming, garbage collection, usefulness logic.
    Abstract:
    Logic programming languages are becoming more complex with the introduction of new features such as constraints or terms with an equality theory. With this increase in complexity, they require more and more sophisticated memory management. This survey gives an insight into the memory management problems of sequential logic programming language implementations; it also describes the presently known solutions. It is meant to be understood by non-specialists in logic programming with a good knowledge of memory management in general. We first describe a "usefulness logic" for run-time objects. Usefulness logic defines non-garbage objects. Next, memory management systems are presented, from the most trivial original run-time system with no real concern for memory problems, to elaborate run-time systems whose memory management closely observes the usefulness logic. Finally, the choice of a garbage collection technique is discussed in relation to logic programming specificities.
    [bibtex-key = bekkers:dynamic:iwmm:92] [bibtex-entry]
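
    To make the "usefulness logic" concrete, here is a small illustrative Prolog fragment of our own (not from the survey), using only standard list predicates: once execution backtracks past the goal that built a structure, no live goal or choice point can reach it any longer.

        % build_table/1 constructs a large ground list.
        build_table(L) :- length(L, 100000), maplist(=(0), L).

        % member(1, L) fails on the all-zero list; execution backtracks
        % into the second disjunct, and the table becomes useless, i.e.
        % reclaimable by a memory manager observing the usefulness logic.
        demo :- ( build_table(L), member(1, L) ) ; true.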


  4. P. Brisset and O. Ridoux. The Architecture of an Implementation of $\lambda$Prolog: Prolog/Mali. In Workshop on $\lambda$Prolog, Philadelphia, 1992. [WWW ] Keyword(s): LambdaProlog, implementation, compilation, memory management.
    Abstract:
    LambdaProlog is a logic programming language accepting a more general clause form than standard Prolog (namely hereditary Harrop formulas instead of Horn formulas) and using simply typed lambda-terms as a term domain instead of first order terms. Despite these extensions, it is still amenable to goal-directed proofs and can still be given a procedural semantics. However, the execution of LambdaProlog programs requires several departures from the standard resolution scheme. First, the augmented clause form causes the program (a set of clauses) and the signature (a set of constants) to be changeable, but in a very disciplined way. Second, the new term domain has a semi-decidable and infinitary unification theory, and it introduces the need for a beta-reduction operation at run-time. MALI is an abstract memory that is suitable for storing the search-state of depth-first search processes. Its main feature is its efficient memory management. We have used an original LambdaProlog-to-C translation: predicates are transformed into functions operating on several continuations. The compilation scheme is sometimes an adaptation of the standard Prolog scheme, but at other times it has to handle new features such as types, beta-reduction and delayed unification. Two keywords of this implementation are "sharing" and "folding" of representations. Sharing amounts to recognising that some representation already exists and reusing it. Folding amounts to recognising that two different representations represent the same thing and replacing one by the other. We assume a basic knowledge of Prolog and LambdaProlog.
    [bibtex-key = bo92b] [bibtex-entry]


  5. M. Ducassé. A general trace query mechanism based on Prolog. In M. Bruynooghe and M. Wirsing, editors, International Symposium on Programming Language Implementation and Logic Programming, volume 631 of Lecture Notes in Computer Science, pages 400-414, August 1992. Springer-Verlag. [WWW ]
    Abstract:
    We present a general trace query language which is a solution to the ever-growing command sets of other tracers. It provides all the required generality while being very simple and efficient. We model a program execution as a trace, which is a stream of events. Execution events have a uniform representation and can be analysed by Prolog programs. With this approach, and thanks to the expressive power of Prolog, two high-level primitives plus Prolog are enough to provide a general trace query language. With a few optimizations this language can work on large executions without any loss of performance compared to traditional tracers. This paper describes the trace query mechanism from its high-level specification down to some implementation details. The proposed model of trace queries depends only on the sequentiality of the execution, and the principles behind the design of the optimizations do not depend on the traced language.
    [bibtex-key = duc92f] [bibtex-entry]
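
    To give the flavour of such a trace query language, here is a hypothetical sketch with invented primitive and event names (not Opium's actual API): once trace events are ordinary Prolog terms, plain Prolog expresses queries that fixed tracer commands cannot.

        % Hypothetical event store: event(Chrono, Port, Goal), with Port
        % one of call, exit, redo, fail, recorded in execution order.
        :- dynamic event/3.

        % All goals that failed at some point of the execution:
        failed_goal(G) :- event(_, fail, G).

        % Chronological number of the first call to a given predicate
        % (Pred/Arity must be bound):
        first_call(Pred/Arity, Chrono) :-
            event(Chrono, call, G),
            functor(G, Pred, Arity),
            !.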


  6. M. Ducassé. A trace analyser to prototype explanations. In Proceedings of JICSLP'92 Workshop on Logic Programming Environments, Washington D.C., November 1992. Note: Technical Report TR 92-143, Case Western Reserve University, Cleveland.
    Abstract:
    Automated debugging and expert system explanations share the aim of helping people understand executions. We have designed an extendable trace analyser for the purpose of automated debugging, which we propose to use to prototype expert system explanations. Two examples illustrate how simple it is to implement abstract tracing of executions and how easy it is to experiment with it.
    [bibtex-key = duc92b] [bibtex-entry]


  7. M. Ducassé. Analysis of failing Prolog executions. In Proceedings of the Journées Francophones sur la Programmation Logique, May 1992.
    Abstract:
    The result of a Prolog execution can simply be ``no'' when the programmer is expecting something else. This symptom is typical of Prolog and especially calls for an execution tracer to get clues about what the problem might be. We present a solution which helps programmers understand how unexpected failures have occurred. We first propose a hierarchy of failing goals. We argue that one kind of leaf failure is the most interesting to track in the first place. We then give the algorithm for our leaf failure tracking and two examples illustrating its use.
    [bibtex-key = duc92c] [bibtex-entry]
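
    An invented example of the symptom and of a leaf failure:

        % Expected: ancestor(tom, ann) succeeds, but a typo in the
        % database makes the whole query answer just 'no'.
        parent(tom, bob).
        parent(bob, anne).          % intended: parent(bob, ann).
        ancestor(X, Y) :- parent(X, Y).
        ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

        % ?- ancestor(tom, ann).   % fails
        % Leaf failure tracking points at the deepest goals that fail
        % with no clause left to try, here e.g. parent(bob, ann),
        % which directly exposes the misspelled fact.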


  8. M. Ducassé. Opium: An advanced debugging system. In G. Comyn and N. Fuchs, editors, Proceedings of the Second Logic Programming Summer School, September 1992. Esprit Network of Excellence in Computational Logic COMPULOG-NET, Springer-Verlag, LNAI 636. [WWW ]
    Abstract:
    The data used by program analysis in general is often restricted to the source code of the analysed programs. However, there is a complementary source of information, namely traces of program executions. Usual tracers, which extract this trace information, do not allow for general trace analysis. Opium, our debugger for Prolog, sets up a framework where program sources and traces of program executions can be jointly analysed. As the debugging process is heuristic and not all debugging strategies have been identified so far, Opium is programmable. In particular, its trace query language gives more flexibility and more power than the hard-coded command sets of usual tracers. This trace query language is based on Prolog. Opium is therefore both a helpful tool for Prolog and a nice application of Prolog. The most innovative extensions of Opium compute abstract views of Prolog executions to help users understand the behaviour of programs. In particular they help them understand how error symptoms have been produced. This article briefly recalls some general information about Opium. A debugging session is then commented in detail.
    [bibtex-key = duc92d] [bibtex-entry]


  9. P. Brisset and O. Ridoux. The Compilation of $\lambda$Prolog and its execution with MALI. Internal Publication 687, IRISA, 1992. [WWW ] Keyword(s): LambdaProlog, implementation, compilation, memory management.
    Abstract:
    We present a compiled implementation of LambdaProlog that uses the abstract memory MALI for representing the execution state. LambdaProlog is a logic programming language allowing a more general clause form than Standard Prolog's (namely hereditary Harrop formulas instead of Horn formulas) and using simply typed lambda-terms as a term domain instead of first order terms. The augmented clause form causes the program (a set of clauses) and the signature (a set of constants) to be changeable in a very disciplined way. The new term domain has a semi-decidable and infinitary unification theory, and it introduces the need for a beta-reduction operation at run-time. MALI is an abstract memory that is suitable for storing the search-state of depth-first search processes. Its main feature is its efficient memory management. We have used an original LambdaProlog-to-C translation in which predicates are transformed into functions operating on continuations for handling failure and success in unifications, and changes in signatures and programs. Two keywords of this implementation are ``sharing'' and ``folding'' of representations. Sharing amounts to recognising that some representation already exists and reusing it. Folding amounts to recognising that two different representations represent the same thing and replacing one by the other.
    [bibtex-key = bo92a] [bibtex-entry]
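
    The idea of turning predicates into functions over continuations can be sketched in Prolog itself (our simplification; the actual translation targets C and also threads failure, signature and program continuations):

        % append in success-continuation style: instead of "returning",
        % a predicate calls its continuation K when it succeeds.
        app([], Ys, Ys, K) :- call(K).
        app([X|Xs], Ys, [X|Zs], K) :- app(Xs, Ys, Zs, K).

        % ?- app([1, 2], [3], Zs, writeln(Zs)).   % prints [1,2,3]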


1991
  1. M. Ducassé and G. Ferrand, editors. Proceedings of ICLP'91 Workshop on Logic Programming Environments, June 1991. Note: Technical Report, University of Orléans, France, LIFO N 91-61. [bibtex-key = df91] [bibtex-entry]


  2. M. Ducassé and A.-M. Emde. Opium: a debugging environment for Prolog development and debugging research. ACM Software Engineering Notes, 16(1):54-59, January 1991. Note: Demonstration presented at the Fourth Symposium on Software Development Environments.
    Abstract:
    Opium is an extensible debugging environment for PROLOG providing high-level debugging facilities for programmers and debugging experts. In the design of debuggers there are two tasks which are often mixed, extraction and analysis of debugging information. The aim of the extraction task is to collect the whole debugging information so that users do not miss any important information about their program. On the other hand, the aim of the analysis task is to restrict in an accurate way the amount of debugging information shown to the user so that the latter has to examine only the relevant parts. This task clearly depends on the debugging situation and, to our point of view, there is no general restriction which can be done a priori. However, the two tasks are usually mixed and hard-coded, the result is that not enough relevant information and too much useless information is displayed. In Opium the two tasks are clearly separated. The extraction module collects the whole debugging information (execution trace and program source) which is then available for the analysis module. The presentation concentrates on the analysis module, discussing the main aspects of Opium: programmability, high-level debugging, extensibility mechanisms, meta-debugging, support for end-users and debugging experts.
    [bibtex-key = de91d] [bibtex-entry]


  3. P. Fradet and D. Le Métayer. Compilation of functional languages by program transformation. ACM Transactions on Programming Languages and Systems, 13(1):21--51, 1991. [WWW ]
    Abstract:
    One of the most important issues concerning functional languages is the efficiency and the correctness of their implementation. We focus on sequential implementations for conventional von Neumann computers. The compilation process is described in terms of program transformations in the functional framework. The original functional expression is transformed into a functional term that can be seen as a traditional machine code. The two main steps are the compilation of the computation rule by the introduction of continuation functions and the compilation of the environment management using combinators. The advantage of this approach is that we do not have to introduce an abstract machine, which makes correctness proofs much simpler. As far as efficiency is concerned, this approach is promising since many optimizations can be described and formally justified in the functional framework.
    [bibtex-key = TOPLAS91] [bibtex-entry]
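
    The first step, compiling the computation rule by introducing continuation functions, is in the spirit of the classical call-by-value CPS transform, recalled here as a reminder (the paper's actual transformation differs in detail):

        $[\![x]\!] = \lambda k.\, k\, x$
        $[\![\lambda x.\, e]\!] = \lambda k.\, k\, (\lambda x.\, [\![e]\!])$
        $[\![e_1\, e_2]\!] = \lambda k.\, [\![e_1]\!]\, (\lambda m.\, [\![e_2]\!]\, (\lambda n.\, m\, n\, k))$

    Evaluation order is then explicit in the term itself, which is what allows the resulting functional term to be read as machine code.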


  4. P. Brisset and O. Ridoux. Naïve Reverse Can Be Linear. In K. Furukawa, editor, 8th Int. Conf. Logic Programming, pages 857--870, 1991. MIT Press. [WWW ] Keyword(s): LambdaProlog, implementation, function-lists, higher-order unification..
    Abstract:
    We propose a new implementation of logic programming with higher-order terms. In order to illustrate the properties of our implementation, we apply the coding of lists as functions to the context of logic programming. As a side-effect, we show that higher-order unification is a good tool for manipulating the function-lists. It appears that the efficiency of the program thus obtained relies critically upon the implementation of higher-order operations (unification and reduction). In particular, we show that a good choice for data-structures and reduction strategy yields a linear naïve reverse.
    [bibtex-key = brisset:naivereverse:iclp:91] [bibtex-entry]
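
    For reference, the classical first-order naive reverse against which the paper's version is measured; it is quadratic because append/3 traverses the accumulated list at every step:

        nrev([], []).
        nrev([X|Xs], R) :- nrev(Xs, T), append(T, [X], R).

        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

        % ?- nrev([a, b, c], R).   % R = [c, b, a]

    Coding lists as functions over their tails (a difference-list style, expressed with lambda-terms) removes the repeated traversal, which is how the LambdaProlog version can be linear when higher-order unification and reduction are implemented carefully.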


  5. M. Ducassé. Abstract views of Prolog executions in Opium. In V. Saraswat and K. Ueda, editors, Proceedings of the International Logic Programming Symposium, San Diego, USA, pages 18-32, October 1991. MIT Press. [WWW ]
    Abstract:
    Opium is a system for analysing and debugging Prolog programs. Its kernel comprises an execution tracer and a programming language with a full set of primitives for trace and source analysis. In this paper we show the power of Opium for supporting abstract views of Prolog executions. Abstract views give high-level points of view about the execution. They filter out irrelevant details; they restructure the remaining information; and they compact it so that the amount of information given at each step has a reasonable size. The examples of abstract views given in the following are a goal execution profile, some data abstractions, an instantiation profile, a failure analysis and a kind of explanation for an expert system written in Prolog.
    [bibtex-key = duc91c] [bibtex-entry]
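
    As an illustration of one such view, here is a hypothetical goal execution profile over a stream of trace events, reusing the invented event/3 representation sketched above (not Opium's actual code):

        % Count how often a given predicate was called during the traced
        % execution (Pred/Arity must be bound).
        call_count(Pred/Arity, Count) :-
            findall(x, ( event(_, call, G), functor(G, Pred, Arity) ), L),
            length(L, Count).

        % ?- call_count(append/3, N).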


  6. P. Fradet. Syntactic Detection of Single-Threading Using Continuations. In Proc. of the 5th ACM Conf. on Functional Prog. Lang. and Comp. Arch., volume 523 of LNCS, Cambridge, MA, USA, pages 241-258, August 1991. Springer-Verlag. [WWW ] Keyword(s): Globalization, single-threading, in-place update, CPS conversion..
    Abstract:
    We tackle the problem of detecting global variables in functional programs. We present syntactic criteria for single-threading which improve upon previous solutions (both syntactic and semantics-based) in that they apply to higher-order languages and to most sequential evaluation strategies. The main idea of our approach lies in the use of continuations. One advantage of continuation expressions is that evaluation ordering is made explicit in the syntax of expressions. Thus, syntactic detection of single-threading is simpler and more powerful on continuation expressions. We present the application of the analysis to the compilation of functional languages, semantics-directed compiler generation and globalization-directed transformations (i.e. transforming non-single-threaded expressions into single-threaded ones). Our results can also be turned to account to get single-threading criteria on regular lambda-expressions for different sequential evaluation orders.
    [bibtex-key = fpca91] [bibtex-entry]


  7. M. Ducassé and A.-M. Emde. A High-level Debugging Environment for Prolog. Opium User's Manual. Technical Report TR-LP-60, ECRC, May 1991. [WWW ] [bibtex-key = de91] [bibtex-entry]



BACK TO INDEX




Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents contained in these directories are made available by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by the authors or by other copyright holders, notwithstanding that they have offered their works here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.




Last modified: Mon Feb 26 15:35:09 2007
Author: ferre.


This document was translated from BibTeX by bibtex2html