
All publications sorted by year

2007

  1. S. Hong Tuan Ha. Programmation par aspects et tissage de propriétés - Application à l'ordonnancement et à la disponibilité. PhD thesis, Université Rennes 1, January 2007. [bibtex-key = hongtuanha:phd:07] [bibtex-entry]


  2. G. Le Guernic. Confidentiality Enforcement Using Dynamic Information Flow Analyses. PhD thesis, Université Rennes 1 and Kansas State University, September 2007. [bibtex-key = Guernic:phd:07] [bibtex-entry]


  3. Benjamin Blanc, Arnaud Gotlieb, and Claude Michel. Constraints in Software Testing, Verification and Analysis. In Frédéric Benhamou, Narendra Jussien, and Barry O'Sullivan, editors, Trends in Constraint Programming, Part 7, pages 333-368. ISTE, London, UK, May 2007. [bibtex-key = wscp/part7] [bibtex-entry]


  4. A. Gotlieb, T. Denmat, and B. Botella. Goal-oriented test data generation for pointer programs. Information and Software Technology, 49(9-10):1030-1044, Sep. 2007. [bibtex-key = GDB07] [bibtex-entry]


  5. Elvira Albert, Miguel Gómez-Zamalloa, Laurent Hubert, and Germán Puebla. Verification of Java Bytecode Using Analysis and Transformation of Logic Programs. In PADL, pages 124-139, 2007. [PDF] [bibtex-key = albert07:_verif_java_bytec_trans_analy] [bibtex-entry]


  6. Gilles Barthe, David Pichardie, and Tamara Rezk. A Certified Lightweight Non-Interference Java Bytecode Verifier. In Proc. of 16th European Symposium on Programming (ESOP'07), volume 4421 of Lecture Notes in Computer Science, pages 125-140, 2007. Springer-Verlag. [bibtex-key = BPR07] [bibtex-entry]


  7. Frédéric Besson, Thomas Jensen, and Tiphaine Turpin. Small Witnesses for Abstract Interpretation-Based proofs. In Proc. of 16th European Symposium on Programming, volume 4421 of Lecture Notes in Computer Science, pages 268-283, 2007. Springer. [bibtex-key = BJT07] [bibtex-entry]


  8. Y. Boichut, T. Genet, Y. Glouche, and O. Heen. Using Animation to Improve Formal Specifications of Security Protocols. In Joint conference SAR-SSI, 2007. [bibtex-key = BoichutGGH-SAR07] [bibtex-entry]


  9. Y. Boichut, T. Genet, T. Jensen, and L. Leroux. Rewriting Approximations for Fast Prototyping of Static Analyzers. In RTA, volume 4533 of LNCS, pages 48-62, 2007. Springer Verlag. [bibtex-key = BoichutGJL-RTA07] [bibtex-entry]


  10. F. Charreteur, B. Botella, and A. Gotlieb. Modelling dynamic memory management in Constraint-Based Testing. In TAIC-PART (Testing: Academic and Industrial Conference), Windsor, UK, Sep. 2007. [bibtex-key = CBG07] [bibtex-entry]


  11. T. Denmat, A. Gotlieb, and M. Ducassé. An Abstract Interpretation Based Combinator for Modeling While Loops in Constraint Programming. In Proceedings of Principles and Practices of Constraint Programming (CP'07), Springer Verlag, LNCS 4741, Providence, USA, pages 241-255, Sep. 2007. [bibtex-key = DGD07a] [bibtex-entry]


  12. T. Denmat, A. Gotlieb, and M. Ducassé. Improving Constraint-Based Testing with Dynamic Linear Relaxations. In 18th IEEE International Symposium on Software Reliability Engineering (ISSRE' 2007), Trollhättan, Sweden, Nov. 2007. [bibtex-key = DGD07b] [bibtex-entry]


  13. G. Le Guernic. Automaton-based Confidentiality Monitoring of Concurrent Programs. In Proceedings of the 20th IEEE Computer Security Foundations Symposium (CSFS20), July 6-8, 2007. IEEE Computer Society.
    Noninterference is typically used as a baseline security policy to formalize confidentiality of secret information manipulated by a program. In contrast to static checking of noninterference, this paper considers dynamic, automaton-based, monitoring of information flow for a single execution of a concurrent program. The monitoring mechanism is based on a combination of dynamic and static analyses. During program execution, abstractions of program events are sent to the automaton, which uses the abstractions to track information flows and to control the execution by forbidding or editing dangerous actions. All monitored executions are proved to be noninterfering (soundness) and executions of programs that are well-typed in a security type system similar to the one of Smith and Volpano are proved to be unaltered by the monitor (partial transparency).
    [bibtex-key = LeGuernic2007acmocp] [bibtex-entry]


  14. G. Le Guernic. Information Flow Testing. In Proceedings of the Annual Asian Computing Science Conference, Lecture Notes in Computer Science, December 9-11, 2007. Note: To appear.
    Noninterference, which is an information flow property, is typically used as a baseline security policy to formalize confidentiality of secret information manipulated by a program. Noninterference verification mechanisms are usually based on static analyses and, to a lesser extent, on dynamic analyses. In contrast to those works, this paper proposes an information flow testing mechanism. This mechanism is sound from the point of view of noninterference. It is based on standard testing techniques and on a combination of dynamic and static analyses. Concretely, a semantics integrating a dynamic information flow analysis is proposed. This analysis makes use of static analyses results. This special semantics is built such that, once a path coverage property has been achieved on a program, a sound conclusion regarding the noninterfering behavior of the program can be established.
    [bibtex-key = leguernic07ift] [bibtex-entry]


  15. G. Le Guernic and J. Perret. FLIC: Application to Caching of a Dynamic Dependency Analysis for a 3D Oriented CRS. In Proceedings of the International Workshop on Rule-Based Programming, Electronic Notes in Theoretical Computer Science (ENTCS), June 29, 2007. Elsevier. Note: To appear.
    FL-systems are conditional rewriting systems. They are used for programming (describing) and evaluating (generating) huge 3D virtual environments, such as cities and forests. This paper presents a formal semantics and a dynamic dependency analysis for FL-systems. This analysis allows the characterization of a set of terms which are joinable with the currently rewritten term. Consequently, it is possible to speed up the rewriting steps of the environments generation by using a cache mechanism which is smarter than standard ones. This work can be seen as a dynamic completion of a set of rewriting rules. This completion increases the number of terms which are rewritten in normal form by the application of a single rewriting rule.
    [bibtex-key = leguernic07flic] [bibtex-entry]


  16. M. Petit and A. Gotlieb. Boosting Probabilistic Choice Operators. In Proceedings of Principles and Practices of Constraint Programming, Springer Verlag, LNCS 4741, Providence, USA, pages 559-573, September 2007.
    Probabilistic Choice Operators (PCOs) are convenient tools to model uncertainty in CP. They are useful to implement randomized algorithms and stochastic processes in the concurrent constraint framework. Their implementation is based on the random selection of a value inside a finite domain according to a given probability distribution. Unfortunately, the probabilistic choice of a PCO is usually delayed until the probability distribution is completely known. This is inefficient and penalizes their broader adoption in real-world applications. In this paper, we associate with each PCO a filtering algorithm that prunes the variation domain of its random variable during constraint propagation. Our algorithm runs in O(n), where n denotes the size of the domain of the probabilistic choice. Experimental results show the practical interest of this approach.
    [bibtex-key = PG07] [bibtex-entry]


  17. M. Petit and A. Gotlieb. Uniform Selection of Feasible Paths as a Stochastic Constraint Problem. In Proceedings of International Conference on Quality Software (QSIC'07), IEEE, Portland, USA, October 2007.
    Automatic structural test data generation is a real challenge of Software Testing. Statistical structural testing has been proposed to address this problem. This testing method aims at building an input probability distribution to maximize the coverage of some structural criteria. Under the all_paths testing objective, statistical structural testing aims at selecting each feasible path of the program with the same probability. In this paper, we propose to model a uniform selector of feasible paths as a stochastic constraint program. Stochastic constraint programming is an interesting framework which combines stochastic decision problems and constraint solving. This paper reports on the translation of the uniform selection of feasible paths problem into a stochastic constraint problem. An implementation designed for this problem, using the PCC(FD) library of SICStus Prolog, is detailed. First experiments, conducted over a few academic examples, show the interest of our approach.
    [bibtex-key = PG07b] [bibtex-entry]


  18. F. Besson, T. Jensen, D. Pichardie, and T. Turpin. Result certification for relational program analysis. Research Report 6333, Inria, September 2007. [WWW] [bibtex-key = BJPT:polycert:techreport] [bibtex-entry]


  19. Frédéric Besson, Thomas Jensen, and Tiphaine Turpin. Computing stack maps with interfaces. Technical report 1879, Irisa, 2007. [WWW] [bibtex-key = BJT:Stackmaps] [bibtex-entry]


  20. David Cachera, Thomas Jensen, and Pascal Sotin. Long-Run Cost Analysis by Approximation of Linear Operators over Dioids. Research Report 6338, INRIA, October 2007. [WWW] [bibtex-key = CacJenSotRR07] [bibtex-entry]


  21. G. Le Guernic. Automaton-based Non-interference Monitoring of Concurrent Programs. Technical report 2007-1, Kansas State University, 234 Nichols Hall, Manhattan, KS 66506, USA, February 2007. [WWW]
    Earlier work [LGBJS06] presents an automaton-based non-interference monitoring mechanism for sequential programs. This technical report extends this work to a concurrent setting. Monitored programs are constituted of a set of threads running in parallel. Those threads run programs equivalent to those of [LGBJS06] except for the inclusion of a synchronization command. The monitoring mechanism is still based on a security automaton and on a combination of dynamic and static analyses. As in [LGBJS06], the monitoring semantics sends abstractions of program events to the automaton, which uses the abstractions to track information flows and to control the execution by forbidding or editing dangerous actions. All monitored executions are proved to be non-interfering (soundness).
    [bibtex-key = LeGuernic2007anmocp] [bibtex-entry]


  22. G. Le Guernic. Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Technical report 2007-5, Kansas State University, 234 Nichols Hall, Manhattan, KS 66506, USA, July 2007. [WWW] [PDF]
    This report proposes a dynamic noninterference analysis for sequential programs. This analysis is well-suited for the development of a monitor enforcing the absence of information flows between the secret inputs and the public outputs of a program. This implies a sound detection of information flows and a sound correction of forbidden flows during the execution. The monitor relies on a dynamic information flow analysis. For unexecuted pieces of code, this dynamic analysis uses any context sensitive static information flow analysis which respects a given set of three hypotheses. The soundness of the overall monitoring mechanism with regard to noninterference enforcement is proved, as well as its higher precision than the automaton-based mechanism proposed in previous work.
    [bibtex-key = LeGuernic2007dnaucssa] [bibtex-entry]


2006

  1. Frédéric Besson, Thomas Jensen, and David Pichardie. Proof-Carrying Code from Certified Abstract Interpretation to Fixpoint Compression. Theoretical Computer Science, 364(3):273-291, 2006. [bibtex-key = TCSAppSem:BessonJensenPichardie] [bibtex-entry]


  2. B. Botella, A. Gotlieb, and C. Michel. Symbolic execution of floating-point computations. Software Testing, Verification and Reliability, 2006. Note: To appear. Keyword(s): Symbolic execution, Floating-point computations, Automatic test data generation, Constraint solving.
    Symbolic execution is a classical program testing technique which evaluates a selected control flow path with symbolic input data. A constraint solver can be used to enforce the satisfiability of the extracted path conditions as well as to derive test data. Whenever path conditions contain floating-point computations, a common strategy consists of using a constraint solver over the rationals or the reals. Unfortunately, even in a fully IEEE-754 compliant environment, this not only leads to approximations but can also compromise correctness: a path can be labelled as infeasible although there exist floating-point input data that satisfy it. In this paper, we address the peculiarities of the symbolic execution of programs with floating-point numbers. Issues in the symbolic execution of this kind of program are carefully examined and a constraint solver is described that supports constraints over floating-point numbers. Preliminary experimental results demonstrate the value of our approach.
    [bibtex-key = BGM05] [bibtex-entry]


  3. Gilles Barthe, Julien Forest, David Pichardie, and Vlad Rusu. Defining and Reasoning About Recursive Functions: A Practical Tool for the Coq Proof Assistant. In Proc of 8th International Symposium on Functional and Logic Programming (FLOPS'06), volume 3945 of Lecture Notes in Computer Science, pages 114-129, 2006. Springer-Verlag. [bibtex-key = BartheFPR06] [bibtex-entry]


  4. F. Besson. Fast Reflexive Arithmetic Tactics: the linear case and beyond. In Types for Proofs and Programs (TYPES'06), volume 4502 of LNCS, pages 48-62, 2006. Springer. [PDF] [bibtex-key = Besson:types06] [bibtex-entry]


  5. Frédéric Besson, Guillaume Dufay, and Thomas Jensen. A Formal Model of Access Control for Mobile Interactive Devices. In 11th European Symposium On Research In Computer Security (ESORICS'06), volume 4189 of Lecture Notes in Computer Science, 2006. Springer. [WWW]
    This paper presents an access control model for programming applications in which the access control to resources can employ user interaction to obtain the necessary permissions. This model is inspired by and improves on the Java MIDP security architecture used in Java-enabled mobile telephones. We consider access control permissions with multiplicities, allowing a permission to be used a certain number of times. An operational semantics of the model and a formal definition of what it means for an application to respect the security model are given. A static analysis which enforces the security model is defined and proved correct. A constraint solving algorithm implementing the analysis is presented.
    [bibtex-key = ESORICS-06-BJD] [bibtex-entry]


  6. F. Besson, T. Jensen, and D. Pichardie. A PCC Architecture based on Certified Abstract Interpretation. In Proc. of 1st International Workshop on Emerging Applications of Abstract Interpretation (EAAI'06), ENTCS, 2006. Springer-Verlag. [WWW]
    Proof-Carrying Code (PCC) is a technique for downloading mobile code on a host machine while ensuring that the code adheres to the host's security policy. We show how certified abstract interpretation can be used to build a PCC architecture where the code producer can produce program certificates automatically. Code consumers use proof checkers derived from certified analysers to check certificates. Proof checkers carry their own correctness proofs, and accepting a new proof checker amounts to type checking the checker in Coq. Fixpoint compression techniques are used to obtain compact certificates. The PCC architecture has been evaluated experimentally on a byte code language for which we have designed an interval analysis that generates certificates ascertaining that no array-out-of-bounds accesses will occur.
    [bibtex-key = EAAI:BessonJensenPichardie] [bibtex-entry]


  7. B. Blanc, F. Bouquet, A. Gotlieb, B. Jeannet, T. Jéron, B. Legeard, B. Marre, C. Michel, and M. Rueher. The V3F project. In Proceedings of Workshop on Constraints in Software Testing, Verification and Analysis (CSTVA'06), Nantes, France, Sep. 2006. [bibtex-key = BBG06] [bibtex-entry]


  8. Y. Boichut and T. Genet. Feasible Trace Reconstruction for Rewriting Approximations. In RTA, volume 4098 of LNCS, pages 123-135, 2006. Springer. [bibtex-key = BoichutG-RTA06] [bibtex-entry]


  9. N. Bonnel and G. Le Guernic. Système de recherche de méthodes Java basé sur leur signature. In Proceedings of Majecstic 2006, November 2006. [PDF]
    The goal of this article is to propose an approach for building a search engine for methods within a programming language. Such a tool is particularly useful to developers. Solutions have already been proposed, but most of them are based on textual search, that is, solely on the textual content of the descriptions of the various methods. In this article, we propose a new approach based on method signatures. The language used throughout this article is Java.
    [bibtex-key = leguernic06jmbrowser] [bibtex-entry]


  10. Y. Glouche, T. Genet, O. Heen, and O. Courtay. A Security Protocol Animator Tool for AVISPA. In ARTIST-2 workshop on security of embedded systems, Pisa (Italy), 2006. [WWW] [bibtex-key = Animator] [bibtex-entry]


  11. A. Gotlieb and P. Bernard. A Semi-empirical Model of Test Quality in Symmetric Testing: Application to Testing Java Card APIs. In Sixth International Conference on Quality Software (QSIC'06), Beijing, China, Oct. 2006. [bibtex-key = GB06] [bibtex-entry]


  12. A. Gotlieb, B. Botella, and M. Watel. Inka: Ten years after the first ideas. In 19th International Conference on Software & Systems Engineering and their Applications (ICSSEA'06), Paris, France, Dec. 2006. [bibtex-key = GBW06] [bibtex-entry]


  13. A. Gotlieb and M. Petit. Path-oriented Random Testing. In Proceedings of the 1st ACM Int. Workshop on Random Testing (RT'06), Portland, Maine, July 2006. [bibtex-key = GP06] [bibtex-entry]


  14. A. Gotlieb and M. Petit. Path-oriented random testing. In Proceedings of the International Workshop on Random Testing, Portland, USA, pages 28-35, July 2006. ACM.
    Test campaigns usually require only a restricted subset of paths in a program to be thoroughly tested. As random testing (RT) offers interesting fault-detection capacities at low cost, we face the problem of building a sequence of random test data that execute only a subset of paths in a program. We address this problem with an original technique based on backward symbolic execution and constraint propagation to generate random test data based on a uniform distribution. Our approach derives path conditions and computes an over-approximation of their associated subdomain to find such a uniform sequence. The challenging problem consists in efficiently building a path-oriented random test data generator by minimizing the number of rejects within the generated random sequence. Our first experimental results, conducted over a few academic examples, clearly show a dramatic improvement of our approach over classical random testing.
    [bibtex-key = GP06] [bibtex-entry]


  15. S. Gouraud and A. Gotlieb. Using CHRs to generate test cases for the JCVM. In Eighth International Symposium on Practical Aspects of Declarative Languages, PADL 06, volume 3819 of Lecture Notes in Computer Science, Charleston, South Carolina, January 2006. Keyword(s): CHR, Software testing, Java Card Virtual Machine.
    Automated functional testing consists in deriving test cases from the specification model of a program to detect faults within an implementation. In our work, we investigate using Constraint Handling Rules (CHRs) to automate the test case generation process of functional testing. Our case study is a formal model of the Java Card Virtual Machine (JCVM) written in a sub-language of the Coq proof assistant. In this paper we define an automated translation from this formal model into CHRs and propose to generate test cases for each bytecode definition of the JCVM. The originality of our approach resides in the use of CHRs to faithfully model the formally specified operational semantics of the JCVM. The approach has been implemented in Eclipse Prolog and a full set of test cases has been generated for testing the JCVM.
    [bibtex-key = GG06] [bibtex-entry]


  16. G. Le Guernic, A. Banerjee, T. Jensen, and D. Schmidt. Automata-based Confidentiality Monitoring. In Proceedings of the Annual Asian Computing Science Conference, volume 4435 of Lecture Notes in Computer Science, December 6-8, 2006. Note: To appear. [PDF]
    Non-interference is typically used as a baseline security policy to formalize confidentiality of secret information manipulated by a program. In contrast to static checking of non-interference, this paper considers dynamic, automaton-based, monitoring of information flow for a single execution of a sequential program. The mechanism is based on a combination of dynamic and static analyses. During program execution, abstractions of program events are sent to the automaton, which uses the abstractions to track information flows and to control the execution by forbidding or editing dangerous actions. The mechanism proposed is proved to be sound, to preserve executions of well-typed programs (in the security type system of Volpano, Smith and Irvine), and to preserve some safe executions of ill-typed programs.
    [bibtex-key = LeGuernic2006acm] [bibtex-entry]


  17. Gurvan Le Guernic, Anindya Banerjee, Thomas Jensen, and David Schmidt. Automaton-based Confidentiality Monitoring. In 11th Annual Asian Computing Science Conference (ASIAN'06), volume 4435 of Lecture Notes in Computer Science, Tokyo, Japan, December 2006. Note: To appear. [bibtex-key = GBJS06] [bibtex-entry]


  18. M. Petit and A. Gotlieb. Raisonner et filtrer avec un choix probabiliste partiellement connu. In Deuxièmes Journées Francophones de Programmation par Contraintes, Nîmes, France, June 2006. [bibtex-key = PG06] [bibtex-entry]


  19. Pascal Sotin, David Cachera, and Thomas Jensen. Quantitative Static Analysis over semirings: analysing cache behaviour for Java Card. In 4th International Workshop on Quantitative Aspects of Programming Languages (QAPL 2006), volume 164 of Electronic Notes in Theoretical Computer Science, pages 153-167, 2006. Elsevier. [bibtex-key = SCJ06] [bibtex-entry]


  20. G. Le Guernic, A. Banerjee, and D. Schmidt. Automaton-based Non-interference Monitoring. Technical report 2006-1, Department of Computing and Information Sciences, College of Engineering, Kansas State University, 234 Nichols Hall, Manhattan, KS 66506, USA, April 2006. [WWW]
    This report presents a non-interference monitoring mechanism for sequential programs. Non-interference is a property of the information flows of a program. It implies the respect of the confidentiality of the secret information manipulated. The approach taken uses an automaton based monitor. During the execution, abstractions of the events occurring are sent to the automaton. The automaton uses those inputs to track the information flows and to control the execution by forbidding or editing dangerous actions. The mechanism proposed is proved to be sound and more efficient than a type system similar to the historical one developed by Volpano, Smith and Irvine.
    [bibtex-key = LeGuernic2006abnim] [bibtex-entry]


  21. Y. Glouche and T. Genet. SPAN -- A Security Protocol ANimator for AVISPA -- User Manual. IRISA / Université de Rennes 1, 2006. [WWW] [bibtex-key = span-manual] [bibtex-entry]


2005

  1. David Pichardie. Interprétation abstraite en logique intuitionniste : extraction d'analyseurs Java certifiés. PhD thesis, Université Rennes 1, Rennes, France, December 2005. [bibtex-key = PicThe05] [bibtex-entry]


  2. F. Besson, T. de Grenier de Latour, and T. Jensen. Interfaces for stack inspection. Journal of Functional Programming, 15(2):179-217, 2005. [WWW]
    Stack inspection is a mechanism for programming secure applications in the presence of code from various protection domains. Run-time checks of the call stack allow a method to obtain information about the code that (directly or indirectly) invoked it in order to make access control decisions. This mechanism is part of the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. A further problem is how such verification can be carried out in an incremental fashion. Incremental analysis is important for avoiding re-analysis of library code every time it is used, and permits the library developer to reason about the code without knowing its context of deployment. We propose a technique for inferring interfaces for stack-inspecting libraries in the form of secure calling context for methods. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. The technique is a constraint-based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.
    [bibtex-key = Besson:05:Interfaces] [bibtex-entry]


  3. David Cachera, Thomas Jensen, David Pichardie, and Vlad Rusu. Extracting a Data Flow Analyser in Constructive Logic. Theoretical Computer Science, 2005.
    We show how to formalise a constraint-based data flow analysis in the specification language of the Coq proof assistant. This involves defining a dependent type of lattices together with a library of lattice functors for modular construction of complex abstract domains. Constraints are expressed in an intermediate representation that allows for both efficient constraint resolution and correctness proof of the analysis with respect to an operational semantics. The proof of existence of a correct, minimal solution to the constraints is constructive which means that the extraction mechanism of Coq provides a provably correct data flow analyser in Ocaml. The library of lattices together with the intermediate representation of constraints are defined in an analysis-independent fashion that provides a basis for a generic framework for proving and extracting static analysers in Coq.
    [bibtex-key = CaJePiRu05TCSExtractAnalyser] [bibtex-entry]


  4. David Cachera and Katell Morin-Allory. Verification of safety properties for parameterized regular systems. Trans. on Embedded Computing Systems, 4(2):228-266, 2005.
    We propose a combination of heuristic methods to prove properties of control signals for regular systems defined by means of affine recurrence equations (AREs). We benefit from the intrinsic regularity of the underlying polyhedral model to handle parameterized systems in a symbolic way. Our techniques apply to safety properties. The general proof process consists in an iteration that alternates two heuristics. We are able to identify the cases when this iteration will stop in a finite number of steps. These techniques have been implemented in a high level synthesis environment based on the polyhedral model.
    [bibtex-key = CacMorTECS05] [bibtex-entry]


  5. David Cachera, Thomas Jensen, David Pichardie, and Gerardo Schneider. Certified Memory Usage Analysis. In Proc of 13th International Symposium on Formal Methods (FM'05), Lecture Notes in Computer Science, 2005. Springer-Verlag.
    We present a certified algorithm for resource usage analysis, applicable to languages in the style of Java byte code. The algorithm verifies that a program executes in bounded memory. The algorithm is destined to be used in the development process of applets and for enhanced byte code verification on embedded devices. We have therefore aimed at a low-complexity algorithm derived from a loop detection algorithm for control flow graphs. The expression of the algorithm as a constraint-based static analysis of the program over simple lattices provides a link with abstract interpretation that allows to state and prove formally the correctness of the analysis with respect to an operational semantics of the program. The certification is based on an abstract interpretation framework implemented in the Coq proof assistant which has been used to provide a complete formalisation and formal verification of all correctness proofs.
    [bibtex-key = CaJePiSc05MemoryUsage] [bibtex-entry]


  6. David Cachera, Thomas Jensen, and Pascal Sotin. Estimating Cache Misses with Semi-Modules and Quantitative Abstraction. In Proc. of IST-APPSEM II Workshop on Applied Semantics, 2005. [bibtex-key = Sotin05] [bibtex-entry]


  7. T. Denmat, M. Ducassé, and O. Ridoux. Data mining and cross-checking of execution traces. A re-interpretation of Jones, Harrold and Stasko test information visualization. In T. Ellman and A. Zisman, editors, Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, November 2005. ACM Press. Note: See RR-5661 for a long version of this article. Keyword(s): Software Engineering, Debugging, Artificial Intelligence, Learning, Knowledge acquisition.
    The current trend in debugging and testing is to cross-check information collected during several executions. Jones et al., for example, propose to use the instruction coverage of passing and failing runs in order to visualize suspicious statements. This seems promising but lacks a formal justification. In this paper, we show that the method of Jones et al. can be re-interpreted as a data mining procedure. More particularly, they define an indicator which characterizes association rules between data. With this formal framework we are able to explain intrinsic limitations of the above indicator.
    [bibtex-key = denmat05b] [bibtex-entry]


  8. T. Denmat, A. Gotlieb, and M. Ducassé. Proving or Disproving Likely Invariants with Constraint Reasoning. In A. Serebrenik, editor, Proceedings of the 15th Workshop on Logic-based Method for Programming Environments, Sitges, SPAIN, October 2005. Note: Satelite event of International Conference on Logic Programming (ICLP'2005). Published in Computer Research Repository cs.SE/0508108. [WWW] Keyword(s): Software Engineering, Testing and Debugging, Program verification, Constraint and logic languages.
    A program invariant is a property that holds for every execution of the program. Recent work suggests inferring only likely invariants, via dynamic analysis. A likely invariant is a property that holds for some executions but is not guaranteed to hold for all executions. In this paper, we present work in progress addressing the challenging problem of automatically verifying that likely invariants are actual invariants. We propose a constraint-based reasoning approach that is able, unlike other approaches, both to prove and to disprove likely invariants. In the latter case, our approach provides counter-examples. We illustrate the approach on a motivating example where automatically generated likely invariants are verified.
    [bibtex-key = DGD05] [bibtex-entry]


  9. A. Gotlieb, T. Denmat, and B. Botella. Constraint-based test data generation in the presence of stack-directed pointers. In 20th IEEE/ACM International Conference on Automated Software Engineering (ASE'05), Long Beach, CA, USA, Nov. 2005. Note: 4 pages, short paper. Keyword(s): Constraint-Based Test data generation, Constraint satisfaction, stack-directed pointers.
    Constraint-Based Test data generation (CBT) exploits constraint satisfaction techniques to generate test data able to kill a given mutant or to reach a selected branch in a program. When pointer variables are present in the program, aliasing problems may arise and may lead to the failure of current CBT approaches. In our work, we propose an overall CBT method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating test data able to reach a selected branch. Our approach correctly handles multi-level stack-directed pointers, which are mainly used in real-time control systems. The method has been fully implemented in the test data generation tool INKA, and first experiments in applying it to a variety of existing programs suggest the interest of the approach.
    [bibtex-key = GDB05b] [bibtex-entry]


  10. A. Gotlieb, T. Denmat, and B. Botella. Goal-oriented test data generation for programs with pointer variables. In 29th IEEE Annual International Computer Software and Applications Conference (COMPSAC'05), Edinburgh, Scotland, pages 449-454, July 2005. Note: 6 pages. Keyword(s): Goal-oriented test data generation, Constraint Logic Programming, Static Single Assignment form, pointer variables.
    Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. This paper proposes to extend an existing goal-oriented test data generation technique to deal with multi-level pointer variables. The approach exploits the results of an intraprocedural flow-sensitive points-to analysis to automatically generate goal-oriented test data at the unit testing level. Implementation is in progress and a few examples are presented.
    [bibtex-key = GDB05a] [bibtex-entry]


  11. S.D. Gouraud and A. Gotlieb. Utilisation des CHRs pour générer des cas de test fonctionnel pour la Machine Virtuelle Java Card. In Premières Journées Francophones de Programmation par Contraintes (JFPC'05), Lens, France, June 8-10, 2005.
    Functional testing based on a formal specification consists in deriving test cases from a formal model in order to detect faults in its implementation. In our work, we study the use of Constraint Handling Rules (CHRs) to automate the generation of functional test cases from a formal model. Our case study is a model of the specification of the Java Card Virtual Machine (JCVM) written in a subset of the Coq language. In this article, we define an automatic translation of this model into CHR rules with the aim of generating test cases for the JCVM. The key point of our approach lies in the use of deep guards to faithfully model the semantics of our formal model of the JCVM. We then propose a global approach to CHR-based functional test case generation that can be applied to other formal models.
    [bibtex-key = GG05] [bibtex-entry]


  12. G. Le Guernic and T. Jensen. Monitoring Information Flow. In Andrei Sabelfeld, editor, Proceedings of the 2005 Workshop on Foundations of Computer Security (FCS'05), pages 19-30, June 2005. DePaul University. Note: LICS'05 Affiliated Workshop.
    We present an information flow monitoring mechanism for sequential programs. The monitor executes a program on standard data that are tagged with labels indicating their security level. We formalize the monitoring mechanism as a big-step operational semantics that integrates a static information flow analysis to gather information flow properties of non-executed branches of the program. Using the information flow monitoring mechanism, it is then possible to partition the set of all executions into two sets. The first one contains executions which are safe, and the other one contains executions which may be unsafe. Based on this information, we show that, by resetting the value of some output variables, it is possible to alter the behavior of executions belonging to the second set in order to ensure the confidentiality of secret data.
    [bibtex-key = LeGuernic2005mif] [bibtex-entry]
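
The monitoring idea above can be shown in miniature. The sketch below is our own toy, far simpler than the paper's big-step semantics: values carry security labels, and an assignment under a secret guard is tainted by the guard's label. It deliberately misses the implicit flow through the non-taken branch, which is exactly the gap the paper's static analysis of non-executed branches closes:

```python
HIGH, LOW = 'high', 'low'

def lub(a, b):
    """Least upper bound of two security labels."""
    return HIGH if HIGH in (a, b) else LOW

def run(secret_input):
    # Monitored program:  if h > 0: l := 1   (implicit flow from h to l)
    env = {'h': (secret_input, HIGH), 'l': (0, LOW)}
    val_h, lab_h = env['h']
    pc = lab_h                        # the guard's label taints the branch
    if val_h > 0:
        env['l'] = (1, lub(LOW, pc))  # l becomes HIGH under a HIGH guard
    return env['l']

print(run(5))   # (1, 'high')
print(run(-1))  # (0, 'low') -- the monitor alone misses this implicit
                # flow; hence the paper's analysis of non-executed branches
```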


  14. G. Le Guernic and J. Perret. FL-system's Intelligent Cache. In Alexandre Vautier and Sylvie Saget, editors, Proceedings of Majecstic 2005, pages 79-88, November 2005. [PDF]
    This article presents an application of software engineering techniques to the modelling of virtual environments. As soon as they become complex, such environments constitute particularly large volumes of data which are, as a result, difficult to manipulate. Moreover, exhaustive, i.e. explicit, descriptions of these environments are hard to evolve. To address these problems, there exist methods based on rewriting systems that allow the objects of the environment to be described generically. Each object is described by an axiom which is rewritten during the generation of the environment. During this phase, two rewriting sequences frequently lead to the same object. It is then possible to use a mechanism such as a cache in order to improve performance by taking advantage of computations already carried out. However, merely matching the names and parameters of the axioms is not sufficient to identify all identical rewritings. Indeed, rewritings of the same axiom with different actual parameters may lead to the same object. The system proposed in this article performs a dynamic analysis of the rewritings in order to improve the performance of the cache by detecting such cases. The first part of this article presents the rewriting system, the second the analysis used, and the last the proposed mechanism.
    [bibtex-key = leguernic05flic] [bibtex-entry]


  16. Katell Morin-Allory and David Cachera. Proving Parameterized Systems: The Use of Pseudo-Pipelines in Polyhedral Logic. In Proc. of 13th CHARME conference, Lecture Notes in Computer Science, pages 376-379, 2005. Springer-Verlag. [bibtex-key = MorCacCHARME05] [bibtex-entry]


  17. David Pichardie. Modular proof principles for parameterized concretizations. In Proc. of 2nd International Workshop on Construction and Analysis of Safe, Secure and Interoperable Smart Devices (CASSIS 2005), number 3956 of Lecture Notes in Computer Science, pages 138-154, 2005. Springer-Verlag. [bibtex-key = Pich05ParamConcr] [bibtex-entry]


  18. F. Besson, T. Jensen, and D. Pichardie. A PCC Architecture based on Certified Abstract Interpretation. Research Report RR-5751, INRIA, November 2005. Note: Also Publication Interne IRISA PI-1764. [PDF] [bibtex-key = RR-5751] [bibtex-entry]


  19. David Cachera and Katell Morin-Allory. Proving Parameterized Systems: the use of a widening operator and pseudo-pipelines in polyhedral logic. Technical report TIMA-RR--05/04-01--FR, TIMA-IMAG, 2005.
    We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, which are particular patterns in the variable definitions generalizing the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.
    [bibtex-key = CacMorRR05] [bibtex-entry]


  20. T. Denmat, M. Ducassé, and O. Ridoux. Data Mining and Cross-checking of Execution Traces. A re-interpretation of Jones, Harrold and Stasko test information visualization (Long version). Research Report RR-5661, INRIA, August 2005. Note: Also Publication Interne IRISA PI-1743. [WWW] [bibtex-key = denmat05c] [bibtex-entry]


  21. G. Le Guernic. Roles & Security, October 2005. Note: Dagstuhl Seminar 03411: Language-Based Security. [WWW] [bibtex-key = LeGuernic2003ras_misc] [bibtex-entry]


2004

  1. M. Huisman and T. Jensen, editors. J. Logic and Algebraic Programming. Special issue on Formal Methods for Smart Cards, vol. 58(1-2), 2004. Elsevier. [bibtex-key = Huisman:04:JLAPSmartCards] [bibtex-entry]


  2. B. Morin. Corrélation d'alertes issues d'outils de détection d'intrusions avec prise en compte d'informations sur le système surveillé. PhD thesis, INSA de Rennes, February 2004. [bibtex-key = Morin04] [bibtex-entry]


  3. Katell Morin-Allory. Vérification formelle dans le modèle polyédrique. PhD thesis, Université de Rennes 1, October 2004. [bibtex-key = alloy-morin04] [bibtex-entry]


  4. M. Eluard and T. Jensen. Vérification du contrôle d'accès dans des cartes à puce multi-application. Technique et Science Informatiques, 23(3):323-358, 2004. [bibtex-key = Eluard:04:TSI] [bibtex-entry]


  5. G. Feuillade, T. Genet, and V. Viet Triem Tong. Reachability Analysis over Term Rewriting Systems. Journal of Automated Reasoning, 33 (3-4):341-383, 2004. [WWW] Keyword(s): Term Rewriting, Reachability, Tree Automata, Approximations, Verification, Timbuk, Abstract Interpretation.
    This paper surveys some techniques and tools for achieving reachability analysis over term rewriting systems. The core of those techniques is a generic tree automata completion algorithm used to compute, in an exact or approximated way, the set of descendants (or reachable terms). This algorithm has been implemented in the Timbuk tool. Furthermore, we show that many classes of the literature having regular sets of descendants correspond to specific instances of the tree automata completion algorithm and can thus be efficiently computed by Timbuk. An extension of the completion algorithm to conditional term rewriting systems and some applications are also presented.
    [bibtex-key = FeuilladeGVTT-JAR04] [bibtex-entry]
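
As a toy illustration of the set of descendants R*(E), the sketch below computes it by naive exhaustive rewriting of a finite set of ground terms under ground rules. This is our own stand-in, not the tree automata completion the paper describes, which also handles infinite regular sets and approximations:

```python
# Terms are strings (constants) or nested tuples ('f', arg1, ...).
# Rules are (lhs, rhs) pairs of ground terms.
def rewrite_once(term, rules):
    """Yield every term obtained by one rewrite step anywhere in `term`."""
    for lhs, rhs in rules:
        if term == lhs:
            yield rhs
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            for new_sub in rewrite_once(term[i], rules):
                yield term[:i] + (new_sub,) + term[i + 1:]

def descendants(initial, rules):
    """R*(E): all terms reachable from `initial` (assumes a finite set)."""
    seen, todo = set(initial), list(initial)
    while todo:
        t = todo.pop()
        for s in rewrite_once(t, rules):
            if s not in seen:
                seen.add(s)
                todo.append(s)
    return seen

rules = [(('f', 'a'), ('f', 'b')),  # f(a) -> f(b)
         ('b', 'c')]                # b -> c
print(('f', 'c') in descendants({('f', 'a')}, rules))  # True
```

Completion works on a tree automaton recognizing E instead of enumerating terms, which is what makes infinite regular sets tractable.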


  6. T. Jensen and J.-L. Lanet. Modélisation et vérification dans les cartes à puce. Revue d'électricité et de l'électronique, 6/7:89-94, 2004. [bibtex-key = Jensen:04:CartesAPuce] [bibtex-entry]


  7. David Cachera, Thomas Jensen, David Pichardie, and Vlad Rusu. Extracting a data flow analyser in constructive logic. In Proc. ESOP'04, number 2986 of Springer LNCS, pages 385 - 400, 2004. [bibtex-key = CachJen:esop04] [bibtex-entry]


  8. Matthieu Petit and Arnaud Gotlieb. An ongoing work on statistical structural testing via probabilistic concurrent constraint programming. In Proc. of SIVOES-MODEVA workshop, St Malo, France, November 2004. IEEE.
    The use of a model to describe and test the expected behavior of a program is a well-proven software testing technique. Statistical structural testing aims at building a model from which an input probability distribution can be derived that maximizes the coverage of some structural criteria by a random test data generator. Our approach consists in converting statistical structural testing into a Probabilistic Concurrent Constraint Programming (PCCP) problem in order 1) to exploit the high declarativity of the probabilistic choice operators of this paradigm and 2) to benefit from its automated constraint solving capacity. This paper reports on ongoing work to implement PCCP and exploit it to solve instances of statistical structural testing problems. Application to testing Java Card applets is discussed.
    [bibtex-key = PG04] [bibtex-entry]


  9. Matthieu Petit and Arnaud Gotlieb. Probabilistic choice operators as global constraints : application to statistical software testing. In Poster presentation in ICLP'04, number 3132 of Springer LNCS, pages 471 - 472, 2004. [bibtex-key = PetitGotlieb:04:poster] [bibtex-entry]


2003

  1. V. Viet Triem Tong. Automates d'arbres et réécriture pour l'étude de problèmes d'accessibilité. PhD thesis, Université Rennes 1, 2003. [bibtex-key = VietTriemTong-PhD03] [bibtex-entry]


  2. A. Banerjee and T. Jensen. Control-flow analysis with rank-2 intersection types. Mathematical Structures in Computer Science, 13(1):87-124, 2003. [bibtex-key = Banerjee:03:Rank2] [bibtex-entry]


  3. F. Spoto and T. Jensen. Class Analyses as Abstract Interpretations of Trace Semantics. ACM Transactions on Programming Languages and Systems (TOPLAS), 25(5):578-630, 2003. [bibtex-key = Spoto:ClassAnalysis:03] [bibtex-entry]


  4. F. Besson and T. Jensen. Modular Class Analysis with DATALOG. In R. Cousot, editor, Proc. of 10th Static Analysis Symposium (SAS 2003), pages 19-36, 2003. Springer LNCS vol. 2694. [WWW]
    Datalog can be used to specify a variety of class analyses for object-oriented programs as variations of a common framework. In this framework, the result of analyzing a class is a set of Datalog clauses whose least fixpoint is the information analysed for. Modular class analysis of program fragments is then expressed as the resolution of open Datalog programs. We provide a theory for the partial resolution of sets of open clauses and define a number of operators for reducing such open clauses.
    [bibtex-key = Besson:03:Modular] [bibtex-entry]
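
The "analysis result = least fixpoint of a set of clauses" view can be sketched with a naive bottom-up Datalog evaluator. The transitive-closure rules below are our own example, not taken from the paper:

```python
def unify(atom, fact, subst):
    """Match an atom (variables start with '?') against a ground fact."""
    if len(atom) != len(fact):
        return None
    s = dict(subst)
    for t, v in zip(atom, fact):
        if isinstance(t, str) and t.startswith('?'):
            if t in s and s[t] != v:
                return None
            s[t] = v
        elif t != v:
            return None
    return s

def match_all(body, db, subst):
    """Yield all substitutions making every body atom a fact in db."""
    if not body:
        yield subst
        return
    for fact in db:
        s = unify(body[0], fact, subst)
        if s is not None:
            yield from match_all(body[1:], db, s)

def least_fixpoint(facts, rules):
    """Naive bottom-up evaluation: apply all rules until nothing is new."""
    db = set(facts)
    while True:
        new = {tuple(s.get(t, t) for t in head)
               for head, body in rules
               for s in match_all(body, db, {})} - db
        if not new:
            return db
        db |= new

edges = {('edge', 'a', 'b'), ('edge', 'b', 'c'), ('edge', 'c', 'd')}
rules = [
    (('path', '?x', '?y'), [('edge', '?x', '?y')]),
    (('path', '?x', '?z'), [('path', '?x', '?y'), ('edge', '?y', '?z')]),
]
db = least_fixpoint(edges, rules)
print(('path', 'a', 'd') in db)  # True
```

An "open" program, in the paper's sense, would additionally contain predicates whose defining clauses live in other program fragments, to be resolved when the fragments are combined.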


  5. David Cachera and Katell Morin-Allory. Verification of Control Properties in the Polyhedral Model. In Proc. 1st MEMOCODE conference, Mont-St-Michel, France, June 2003. [WWW] [bibtex-key = CacMor03] [bibtex-entry]


  6. David Cachera and David Pichardie. Embedding of Systems of Affine Recurrence Equations in Coq. In Proc. TPHOLs 2003, 16th International Conference on Theorem Proving in Higher Order Logics, LNCS, Rome, Italy, September 2003. [WWW] [bibtex-key = CacPic03] [bibtex-entry]


  7. G. Feuillade and T. Genet. Reachability in conditional term rewriting systems. In FTP'2003, International Workshop on First-Order Theorem Proving, volume 86 n. 1 of Electronic Notes in Theoretical Computer Science, June 2003. Elsevier. [WWW] Keyword(s): Term Rewriting, Conditional Term Rewriting, Reachability, Tree Automata. [bibtex-key = FeuilladeG-FTP03] [bibtex-entry]


  8. T. Genet, T. Jensen, V. Kodati, and D. Pichardie. A Java Card CAP Converter in PVS. In Proceedings of the 2nd International Workshop on Compiler Optimization Meets Compiler Verification (COCV 2003), 2003. ENTCS 82(2). [WWW] Keyword(s): Java, Java Card, CAP Format, Compiler, Verification, Proof Assistant, PVS. [bibtex-key = GenetJKP-COCV03] [bibtex-entry]


  9. T. Genet, Y.-M. Tang-Talpin, and V. Viet Triem Tong. Verification of Copy Protection Cryptographic Protocol using Approximations of Term Rewriting Systems. In Proceedings of the Workshop on Issues in the Theory of Security, 2003. [WWW] Keyword(s): Cryptographic Protocol, Verification, Term Rewriting, Reachability, Approximation, Timbuk. [bibtex-key = GenetTTVTT-wits03] [bibtex-entry]


  10. Arnaud Gotlieb. Exploiting Symmetries to Test Programs. In Proc. of 14th IEEE International Symposium on Software Reliability Engineering (ISSRE 2003), Denver, Colorado, USA, 2003. Note: 17th to 20th November. [bibtex-key = Gotlieb:03:SymTest] [bibtex-entry]


  11. A. Gotlieb and B. Botella. Automated Metamorphic Testing. In Proc. of the 27th IEEE Annual International Computer Software and Applications Conference (COMPSAC), Dallas, TX, USA, 2003. Note: 3rd to 7th November. [bibtex-key = Gotlieb:03:MetaTest] [bibtex-entry]


  12. Lionel van Aertryck and Thomas Jensen. UML-CASTING: Test synthesis from UML models using constraint resolution. In J-M. Jézéquel, editor, Proc. Approches Formelles dans l'Assistance au Développement de Logiciels (AFADL'2003), 2003. INRIA. [bibtex-key = Aertryck:03:UMLCasting] [bibtex-entry]


  13. David Cachera and Katell Morin-Allory. Verification of Control Properties in the Polyhedral Model. Technical report 1515, INRIA, 2003. [WWW] [bibtex-key = CacMorRR03] [bibtex-entry]


  14. David Cachera and David Pichardie. Proof Tactics for the Verification of Structured Systems of Affine Recurrence Equations. Technical report 1511, INRIA, 2003. [WWW] [bibtex-key = CacPicRR03] [bibtex-entry]


  15. G. Feuillade, T. Genet, and V. Viet Triem Tong. Reachability Analysis over Term Rewriting Systems. Technical report RR-4970, INRIA, 2003. [WWW] Keyword(s): Term Rewriting, Reachability, Tree Automata, Approximations, Verification, Timbuk, Abstract Interpretation.
    This paper surveys some techniques and tools for achieving reachability analysis over term rewriting systems. The core of those techniques is a generic tree automata completion algorithm used to compute, in an exact or approximated way, the set of descendants (or reachable terms). This algorithm has been implemented in the Timbuk tool. Furthermore, we show that the classes of the literature having regular sets of descendants correspond to specific instances of the tree automata completion algorithm and can thus be efficiently computed by Timbuk. An extension of the completion algorithm to conditional term rewriting systems and some applications are also presented.
    [bibtex-key = FeuilladeGVTT-RR03] [bibtex-entry]


  16. T. Genet. Timbuk 2.0 -- A Tree Automata Library -- Reference Manual and Tutorial. 2003. Note: 81 pages. [WWW] [bibtex-key = Timbuk-manual] [bibtex-entry]


2002

  1. F. Besson. Analyse modulaire de programmes. PhD thesis, Université de Rennes 1, 2002. [WWW] [bibtex-key = Besson:02:PhD] [bibtex-entry]


  2. Siegfried Rouvrais. Utilisation d'agents mobiles pour la construction de services distribués. PhD thesis, Université de Rennes 1, July 2002. Note: N. d'ordre : 2614. [bibtex-key = Rouvrais02a] [bibtex-entry]


  3. T. Jensen. Types in program analysis. In Torben Mogensen, David Schmidt, and I. Hal Sudborough, editors, The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones, Lecture Notes in Computer Science 2566, pages 204-222. Springer-Verlag, 2002. [bibtex-key = Jensen:02:Types] [bibtex-entry]


  4. E. Denney and T. Jensen. Correctness of Java Card method lookup via logical relations. Theoretical Computer Science, 283:305-331, 2002. [bibtex-key = Denney:02:Correctness] [bibtex-entry]


  5. F. Besson, Thomas de Grenier de Latour, and T. Jensen. Secure calling contexts for stack inspection. In Proc. of 4th Int Conf. on Principles and Practice of Declarative Programming (PPDP 2002), pages 76-87, 2002. ACM Press. [WWW]
    Stack inspection is a mechanism for programming secure applications by which a method can obtain information from the call stack about the code that (directly or indirectly) invoked it. This mechanism plays a fundamental role in the security architecture of Java and the .NET Common Language Runtime. A central problem with stack inspection is to determine to what extent the local checks inserted into the code are sufficient to guarantee that a global security property is enforced. In this paper, we present a technique for inferring a secure calling context for a method. By a secure calling context we mean a pre-condition on the call stack sufficient for guaranteeing that execution of the method will not violate a given global property. This is particularly useful for annotating library code in order to avoid having to re-analyse libraries for every new application. The technique is a constraint-based static program analysis implemented via fixed point iteration over an abstract domain of linear temporal logic properties.
    [bibtex-key = Besson:02:CallingContexts] [bibtex-entry]
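
A minimal executable model of the stack inspection mechanism the paper analyses (the frame layout and names are ours, loosely following the Java 2 discipline of permission checks and privileged frames):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    perms: frozenset       # permissions granted to this frame's code
    privileged: bool = False

def check_permission(stack, perm):
    """Walk from the most recent frame (end of list) towards the root:
    fail at the first frame lacking the permission, succeed early at a
    privileged frame that holds it (it vouches for its callers)."""
    for frame in reversed(stack):
        if perm not in frame.perms:
            return False
        if frame.privileged:
            return True
    return True

lib = Frame(perms=frozenset({'file.read'}))
app = Frame(perms=frozenset())     # untrusted application code
priv = Frame(perms=frozenset({'file.read'}), privileged=True)

print(check_permission([app, lib], 'file.read'))        # False
print(check_permission([app, priv, lib], 'file.read'))  # True
```

A secure calling context, in the paper's sense, is a pre-condition on the `stack` argument under which such checks can never fail.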


  6. Thomas Jensen, Florimond Ployette, and Olivier Ridoux. Iteration schemes for fixed point computation. In A. Ingolfsdottir and Z. Esik, editors, Proc. of 4th Int workshop on Fixed Points in Computer Science (FICS'02), pages 69-76, 2002. [bibtex-key = Jensen:02:Iteration] [bibtex-entry]


  7. Marc Éluard and Thomas Jensen. Secure object flow analysis for Java Card. In Proc. of 5th Smart Card Research and Advanced Application Conference (Cardis'02), 2002. IFIP/USENIX. [bibtex-key = Eluard:02:SecureObjectFlow] [bibtex-entry]


  8. T. Genet and V. Viet Triem Tong. Proving Negative Conjectures on Equational Theories using Induction and Abstract Interpretation. Technical report RR-4576, INRIA, 2002. [WWW] Keyword(s): Equational theories, proof by induction, abstract interpretation, rewriting.
    In this paper we present a method to prove automatically initial negative properties on equational specifications. This method combines induction and abstract interpretation. Induction is performed in a classical way using cover sets. Abstract interpretation is done using an additional set of equations used to approximate the initial model into an abstract one. As in the methods dedicated to the proof by induction of positive properties, the equational specification is supposed to be oriented into a terminating, confluent and complete term rewriting system.
    [bibtex-key = GenetVTTong-RR02] [bibtex-entry]


2001

  1. Isabelle Attali and Thomas Jensen, editors. Smart Card Programming and Security (e-Smart 2001), September 2001. Springer LNCS vol. 2140. [bibtex-key = Attali:01:EsmartProceedings] [bibtex-entry]


  2. Marc Éluard. Analyse de sécurité pour la certification d'applications Java Card. PhD thesis, Université de Rennes 1, December 2001. Note: N. d'ordre : 2614. [bibtex-key = Eluard:01:PhD] [bibtex-entry]


  3. F. Besson, T. Jensen, D. Le Métayer, and T. Thorn. Model checking security properties of control flow graphs. Journal of Computer Security, 9:217-250, 2001. [WWW]
    A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a global security property. This article introduces a formalism based on a linear-time temporal logic for specifying global security properties pertaining to the control flow of the program, and illustrates its expressive power with a number of existing properties. We define a minimalistic, security-dedicated program model that only contains procedure call and run-time security checks and propose an automatic method for verifying that an implementation using local security checks satisfies a global security property. We then show how to instantiate the framework to the security architecture of Java 2 based on stack inspection and privileged method calls.
    [bibtex-key = Besson:01:ModelChecking] [bibtex-entry]


  4. T. Genet and Valérie Viet Triem Tong. Reachability Analysis of Term Rewriting Systems with Timbuk. In 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning, volume 2250 of Lecture Notes in Artificial Intelligence, pages 691-702, 2001. Springer Verlag. [WWW] Keyword(s): Timbuk, Term Rewriting, Reachability, Tree Automata, Descendants, Verification.
    We present Timbuk -- a tree automata library -- which implements usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non linear and non terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
    [bibtex-key = GenetVTTong-LPAR01] [bibtex-entry]


  5. T. Jensen and F. Spoto. Class analysis of object-oriented programs through abstract interpretation. In F. Honsell and M. Miculan, editors, Proc. of Foundations of Software Science and Computation Structures (FoSSaCS'01), pages 261-275, 2001. Springer LNCS vol .2030. [bibtex-key = Jensen:01:ClassAnalysis] [bibtex-entry]


  6. David Pichardie and Yves Bertot. Formalizing Convex Hulls Algorithms. In Proc. of 14th International Conference on Theorem Proving in Higher Order Logics (TPHOLs'01), number 2152 of Lecture Notes in Computer Science, pages 346-361, 2001. Springer-Verlag. [bibtex-key = BePi01Convexhull] [bibtex-entry]


  7. Igor Siveroni, Thomas Jensen, and Marc Éluard. A Formal Specification of the Java Card Applet Firewall. In Hanne Riis Nielson, editor, Nordic Workshop on Secure IT-Systems, November 2001. [bibtex-key = Siveroni:01:Nordsec] [bibtex-entry]


  8. Marc Éluard, Thomas Jensen, and Ewen Denney. An Operational Semantics of the Java Card Firewall. In Isabelle Attali and Thomas Jensen, editors, Smart Card Programming and Security (e-Smart 2001), volume 2140 of Springer LNCS, September 2001. [bibtex-key = Eluard:01:Esmart] [bibtex-entry]


  9. Thomas Genet and Valérie Viet Triem Tong. Reachability Analysis of Term Rewriting Systems with Timbuk (extended version). Technical Report RR-4266, INRIA, 2001. [WWW]
    We present Timbuk -- a tree automata library -- which implements usual operations on tree automata as well as a completion algorithm used to compute an over-approximation of the set of descendants R*(E) for a regular set E and a term rewriting system R, possibly non linear and non terminating. On several examples of term rewriting systems representing programs and systems to verify, we show how to use Timbuk to construct their approximations and then prove unreachability properties of these systems.
    [bibtex-key = GenetVTTong-RR01] [bibtex-entry]


2000

  1. I. Attali and T. Jensen, editors. Proceedings of the International Workshop on Java Card (Java Card 2000), Cannes, France, September 2000. Inria. [bibtex-key = Jensen:00:JavaCard] [bibtex-entry]


  2. Pascal Fradet. Approches langages pour la conception et la mise en oeuvre de programmes. Habilitation thesis (habilitation à diriger des recherches), Université de Rennes 1, November 2000. [bibtex-key = HDR-PF] [bibtex-entry]


  3. Michaël Périn. Spécifications graphiques multi-vues : formalisation et vérification de cohérence. PhD thesis, IFSIC, October 2000. [bibtex-key = ThesePerin] [bibtex-entry]


  4. E. Denney and T. Jensen. Correctness of Java Card method lookup via logical relations. In Proc. of European Symp. on Programming (ESOP 2000), Lecture Notes in Computer Science, pages 104-118, 2000. Springer. [bibtex-key = Denney:00:Correctness] [bibtex-entry]


  5. T. Genet and F. Klay. Rewriting for Cryptographic Protocol Verification. In Proceedings 17th International Conference on Automated Deduction, volume 1831 of Lecture Notes in Artificial Intelligence, 2000. Springer-Verlag. [WWW] Keyword(s): Cryptographic Protocol, Verification, Term Rewriting, Reachability, Approximation, Timbuk.
    On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring safeness of the approximation.
    [bibtex-key = GenetKlay-CADE00] [bibtex-entry]


  6. T. Genet and F. Klay. Rewriting for Cryptographic Protocol Verification (extended version). Technical Report 3921, INRIA, 2000. [WWW]
    On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring safeness of the approximation.
    [bibtex-key = GenetK-CNET99] [bibtex-entry]


1999

  1. Thomas Jensen. Analyse statiques de programmes : fondements et applications. Habilitation thesis (habilitation à diriger des recherches), Université de Rennes 1, December 1999. [bibtex-key = Jensen:99:Hab] [bibtex-entry]


  2. T. Thorn. Vérification de politiques de sécurité par analyse de programmes. PhD thesis, Université de Rennes I, Ifsic, Irisa, February 1999. [bibtex-key = TT99] [bibtex-entry]


  3. F. Besson, T. Jensen, and J.P. Talpin. Polyhedral Analysis for Synchronous Languages. In Gilberto Filé, editor, International Static Analysis Symposium, SAS'99, volume 1694 of Lecture Notes in Computer Science, September 1999. Springer-Verlag. Keyword(s): synchronous languages, abstract interpretation, reachability analysis, infinite state systems.
    We define an operational semantics for the Signal language and design an analysis which makes it possible to verify properties pertaining to the relation between values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it is expressed and proved correct with respect to the source program rather than on an intermediate representation of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.
    [bibtex-key = SAS99:BJT] [bibtex-entry]
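
Widening, the key ingredient of the fixed point computation described above, can be illustrated in the simpler interval domain (the paper works with convex polyhedra; this sketch and its names are ours):

```python
import math

def widen(old, new):
    """Standard interval widening: any growing bound jumps to infinity,
    forcing the iteration to stabilise in finitely many steps."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

def join(a, b):
    """Interval union (convex hull of two intervals)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def analyse_loop():
    """Abstract interpretation of:  x := 0; while x < 100: x := x + 1
    Without widening the loop-head value would take ~100 iterations to
    stabilise; with widening it takes two."""
    x = (0, 0)                         # abstract value of x at loop head
    while True:
        body = (x[0] + 1, x[1] + 1)    # effect of x := x + 1
        nxt = widen(x, join(x, body))
        if nxt == x:
            return x
        x = nxt

print(analyse_loop())  # (0, inf): sound but coarse without narrowing
```

The paper's widening plays the same role, but over polyhedra, using their convex hull representation to decide which constraints to keep.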


  4. T. Jensen, D. Le Métayer, and T. Thorn. Verification of control flow based security properties. In Proc. of the 20th IEEE Symp. on Security and Privacy, pages 89-103, May 1999. New York: IEEE Computer Society. [bibtex-key = SSP99] [bibtex-entry]


  5. E. Denney, P. Fradet, C. Goire, T. Jensen, and D. Le Métayer. Procédé de vérification de transformateurs de codes pour un système embarqué, notamment sur une carte à puce, July 1999. Note: French patent (brevet d'invention). [bibtex-key = brevet99] [bibtex-entry]


1998

  1. J. Mallet. Compilation d'un langage spécialisé pour machine massivement parallèle. PhD thesis, Université de Rennes I, Ifsic, Irisa, 1998. [WWW] Keyword(s): parallelism, compilation, specialized language, program skeleton, data distribution, program transformation, cost analysis.
    The programming languages used for parallel computers are either efficient languages with explicit parallelism, which are not portable and are very complex to use, or simple portable languages, whose compilation is complex and relatively inefficient. We propose a specialized language based on program skeletons encapsulating data and control flow, for which an accurate cost analysis of the parallel implementation exists. The compilation process handles the automatic choice of data distributions on the processors, guided by the accurate costs guaranteed by the source language. This makes it possible to obtain fully automatic compilation producing efficient parallel code (the distributions representing a global choice of parallel implementation). The compilation process is described as a series of program transformations, each transformation mapping one intermediate skeleton-based language into another. The target language is an SPMD-like skeleton-based language that translates straightforwardly into a sequential language with calls to communication library routines. The main compilation steps are size analysis, the in-place updating transformation, the explicitation of communications, and the choice of data distributions.
    [bibtex-key = Mallet98c] [bibtex-entry]


  2. V.-A. Nicolas. Preuves de propriétés de classes de programmes par dérivation systématique de jeux de test. PhD thesis, Université de Rennes I, Ifsic, Irisa, December 1998. [WWW] Keyword(s): Software engineering, program verification, white-box testing, automated test data generation, program analysis, program schemes.
    The problem addressed in this thesis is the automatic generation of test data sets enabling the proof of program properties. We thus place ourselves half-way between the domain of software testing and that of program verification. Work on software testing has led to easy-to-use semi-automatic tools, but these rely on hypotheses that are difficult to verify in practice. In the domain of verification, some tools based on formal methods have been developed, but they often require a user highly experienced with the proof techniques used by the tool. This is due to undecidability problems arising from the Turing completeness of the formalisms treated. Our primary contribution is a new approach to program verification, combining the techniques of software testing and static analysis. We propose a formal method for the generation of finite test data sets, allowing one to prove that a program satisfies a given property. This method uses the program source and the property, which must belong to a certain class of programs (or properties). These classes are represented by hierarchies of program schemes, which can be seen as modeling some test hypotheses. Every program belonging to one of our schemes and passing the generated test data set satisfies the tested property. For a given property, our method is completely automatic and thus requires no special competence from the user. We have implemented this method in a prototype (for a restricted functional language), for properties expressible in terms of list length.
    [bibtex-key = Nicolas-these98] [bibtex-entry]


  3. T. Jensen. Inference of polymorphic and conditional strictness properties. In Proc. of 25th ACM Symposium on Principles of Programming Languages, pages 209-221, 1998. ACM Press. [WWW]
    We define an inference system for modular strictness analysis of functional programs by extending a conjunctive strictness logic with polymorphic and conditional properties. This extended set of properties is used to define a syntax-directed, polymorphic strictness analysis based on polymorphic recursion, whose soundness is established via a translation from the polymorphic system into the conjunctive system. From the polymorphic analysis, an inference algorithm based on constraint resolution is derived and shown complete for a variant of the polymorphic analysis. The algorithm deduces at the same time a property and a set of hypotheses on the free variables of an expression, which makes it suitable for analysis of programs with module structure.
    [bibtex-key = Jensen:98:Inference] [bibtex-entry]


  4. T. Jensen, D. Le Métayer, and T. Thorn. Security and Dynamic Class Loading in Java: A Formalisation. In Proceedings of the 1998 IEEE International Conference on Computer Languages, pages 4-15, May 1998. New York: IEEE Computer Society. [WWW]
    We give a formal specification of the dynamic loading of classes in the Java Virtual Machine (JVM) and of the visibility of members of the loaded classes. This specification is obtained by identifying the part of the run-time state of the JVM that is relevant for dynamic loading and visibility and consists of a set of inference rules defining abstract operations for loading, linking and verification of classes. The formalisation of visibility includes an axiomatisation of the rules for membership of a class under inheritance, and of accessibility of a member in the presence of accessibility modifiers such as private and protected. The contribution of the formalisation is twofold. First, it provides a clear and concise description of the loading process and the rules for member visibility compared to the informal definitions of the Java language and the JVM. Second, it is sufficiently simple to allow calculations of the effects of load operations in the JVM.
    [bibtex-key = tjdlmtt-iccl98] [bibtex-entry]


  5. T. Jensen, D. Le Métayer, and T. Thorn. Verification of control flow based security policies. Technical report 1210, IRISA, 1998. [WWW] Keyword(s): security, verification, finite-state system, control flow, object orientation.
    A fundamental problem in software-based security is whether local security checks inserted into the code are sufficient to implement a given global security policy. We introduce a formalism based on a two-level linear-time temporal logic for specifying global security policies pertaining to the control-flow of the program, and illustrate its expressive power with a number of existing policies. We define a minimalistic, security-dedicated program model that only contains procedure call and dynamic security checks and propose an automatic method for verifying that an implementation using local security constructs satisfies a global security policy. For a given formula in the temporal logic we prove that there exists a bound on the size of the states that have to be considered in order to assure the validity of the formula: this reduces the problem to finite-state model checking. Finally, we instantiate the framework to the security architecture proposed for Java (JDK 1.2).
    [bibtex-key = jensen:98:verification] [bibtex-entry]


1997

  1. V. Gouranton. Dérivation d'analyseurs dynamiques et statiques à partir de spécifications opérationnelles. PhD thesis, Université de Rennes I, Ifsic, Irisa, 1997. [WWW] Keyword(s): dynamic analysis, static analysis, natural semantics, program transformation, abstract interpretation, liveness analysis, neededness analysis, program slicing.
    We present a framework for designing dynamic and static analysers from operational specifications. We propose analyser specifications in two parts: the semantics of the programming language under consideration, and the definition of the property of interest. We adopt natural semantics for the definition of the programming language. These semantics have a double advantage: they are both compositional and intensional. Properties are defined as functions on the proof trees of the semantics. Properties that are not specific to a given language can thus be defined once and for all and then specialised for particular languages. Composing the semantics with the property defines an a posteriori dynamic analysis: it is a function that first computes the complete execution trace (proof tree) of a program before extracting the desired property. Program transformations then yield a self-contained recursive definition of the analyser. This function is in fact an on-the-fly dynamic analyser, in the sense that it computes the desired property as the program executes. An abstraction of this analyser is then performed to obtain a static analyser, by applying the theory of abstract interpretation. We illustrate our framework with examples of specific analyses, such as liveness analysis for an imperative language and neededness analysis for a higher-order functional language. We also show how an analysis of general interest, namely program filtering, can be defined over a semantics format and then instantiated for different languages.
    [bibtex-key = Gouranton:97:DerivationD] [bibtex-entry]


  2. T. Jensen. Disjunctive Program Analysis for Algebraic Data Types. ACM Transactions on Programming Languages and Systems, 19(5):752-804, 1997. [WWW]
    We describe how binding-time, data-flow, and strictness analyses for languages with higher-order functions and algebraic data types can be obtained by instantiating a generic program logic and axiomatization of the properties analyzed for. A distinctive feature of the analyses is that disjunctions of program properties are represented exactly. This yields analyses of high precision and provides a logical characterization of abstract interpretations involving tensor products and uniform properties of recursive data structures. An effective method for proving properties of a program based on fixed-point iteration is obtained by grouping logically equivalent formulae of the same type into equivalence classes, obtaining a lattice of properties of that type, and then defining an abstract interpretation over these lattices. We demonstrate this in the case of strictness analysis by proving that the strictness abstract interpretation of a program is the equivalence class containing the strongest property provable of the program in the strictness logic.
    [bibtex-key = Jensen:97:Disjunctive] [bibtex-entry]



BACK TO INDEX

This document was translated from BibTeX by bibtex2html