
Chapter 5  Conclusion

The framework of logic functors makes it possible for an end-user to safely design a new logic. A theorem prover, together with its consistency/completeness proof, is derived automatically. If necessary, the proof exhibits prerequisites that may indicate how to build more consistent/complete variants of the logic. The framework is implemented in CAML, and crucially relies on the ability of the ML family of languages to define parameterized modules (also called functors in that domain). The framework occupies an original position between plain programming, in which almost nothing semantic can be proved, and option selection, which leaves very little choice [dCFG+01]. We call this original position “gluing”.
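
As an illustration, here is a minimal sketch, with invented names, of how a logic is packaged as an ML module over a fixed signature and a logic functor as an ML functor over that signature; the actual signature of the framework is richer, as it also covers syntax, semantics, and the class of operations.

  (* Illustrative only: a logic as a module over a fixed signature. *)
  module type LOGIC = sig
    type formula                               (* abstract syntax *)
    val entails : formula -> formula -> bool   (* the derived prover *)
  end

  (* A leaf logic: integer intervals ordered by inclusion. *)
  module Interval : LOGIC with type formula = int * int = struct
    type formula = int * int
    let entails (l1, h1) (l2, h2) = l2 <= l1 && h1 <= h2
  end

  (* A simple logic functor: the product of two logics, where
     entailment is checked componentwise. *)
  module Prod (A : LOGIC) (B : LOGIC) :
    LOGIC with type formula = A.formula * B.formula =
  struct
    type formula = A.formula * B.formula
    let entails (a1, b1) (a2, b2) = A.entails a1 a2 && B.entails b1 b2
  end

  module Rectangle = Prod (Interval) (Interval)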

The semantics of logic functors is given in a style that is close to the semantics of description logics. We have shown how to rebuild the existing logic ALC, and how to design new variants of it. However, this semantic style does not limit applications to variants of description logics: we have also designed a logic for Logical Information Systems that is based on multisets, valued attributes, concrete domains, Boolean operators, and the Closed World Assumption.
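
To give a flavour of this compositional style (with functor and module names invented for the example, not those of the actual toolbox), the following sketch lifts a concrete domain of integer intervals to descriptions made of valued attributes.

  module type LOGIC = sig
    type formula
    val entails : formula -> formula -> bool
  end

  (* Concrete domain: integer intervals ordered by inclusion. *)
  module Interval : LOGIC with type formula = int * int = struct
    type formula = int * int
    let entails (l1, h1) (l2, h2) = l2 <= l1 && h1 <= h2
  end

  (* Valued attributes over a value logic V: a description entails a query
     if it covers every attribute the query mentions with an entailed value. *)
  module Attr (V : LOGIC) : LOGIC with type formula = (string * V.formula) list =
  struct
    type formula = (string * V.formula) list
    let entails d q =
      List.for_all
        (fun (a, vq) -> List.exists (fun (a', vd) -> a = a' && V.entails vd vq) d)
        q
  end

  module FileDescr = Attr (Interval)

  let matches =
    FileDescr.entails
      [ ("size", (10, 20)); ("year", (2002, 2002)) ]    (* description *)
      [ ("size", (0, 100)) ]                             (* query: size in [0,100] *)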

Our work suggests a software architecture for logic-based systems in which the system is generic in the logic, so that the logic component can be defined separately and plugged in when needed. We have realized a prototype Logical Information System along these lines [FR04]: Camelis, which can be freely downloaded at http://www.irisa.fr/lande/ferre/camelis.
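
The following sketch (again with invented names) illustrates this architecture: the application is an ML functor that is generic in the logic, and a concrete logic module is plugged in when the system is assembled.

  module type LOGIC = sig
    type formula
    val parse : string -> formula
    val entails : formula -> formula -> bool
  end

  (* A toy logical information system: objects carry a logical description,
     and querying selects the objects whose description entails the query. *)
  module Lis (L : LOGIC) = struct
    type obj = { name : string; descr : L.formula }
    let make name descr = { name; descr = L.parse descr }
    let select query objs =
      let q = L.parse query in
      List.filter (fun o -> L.entails o.descr q) objs
  end

  (* Plugging in a trivial logic: a formula is a set of keywords,
     and entailment is keyword inclusion. *)
  module Keywords : LOGIC = struct
    type formula = string list
    let parse s = String.split_on_char ' ' s
    let entails d q = List.for_all (fun k -> List.mem k d) q
  end

  module MyLis = Lis (Keywords)

  let hits =
    MyLis.select "logic"
      [ MyLis.make "paper.tex" "latex logic draft";
        MyLis.make "notes.txt" "logic functors" ]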

5.1  Related work

Our use of the word functor is similar to its use in ML, where it designates parameterized modules [Mac88b]; indeed, our logic functors are ML modules. However, logic functors are very specialized, whereas ML functors are general purpose (in short, we have fixed the signature), and they carry a semantic component. Both the specialization and the semantic component allow us to express composition conditions that are out of the scope of ML functors.

The theory of institutions [GB92] shares our concern for customized logics, and also uses the word functor. However, the focus and theoretical ground are different. Institutions focus on the relation between notations and semantics, whereas we focus on the relation between semantics and operations; in fact, the class of operations P is what we need in order to enforce embeddability. The theory of institutions is developed in category theory, where there are functors from signatures to formulas, from signatures to models, and from institutions to institutions. Our logic functors correspond to parameterized institutions.

An important work that shares our motivations is LeanTAP [BP95]. The authors of LeanTAP have also recognized the need for embedding customized logics in applications. To this end, they propose a very concise style of theorem proving, which they call lean theorem proving, and they claim that a theorem prover written in this style is so concise that it is very easy to modify it in order to accommodate a different logic. Indeed, they have proposed a theorem prover for first-order logic, and several variants of it for modal logics, etc. Note that the first-order theorem prover consists of fewer than 20 Prolog clauses. However, their claim does not take into account the fact that the average user is not a logic expert. There is no doubt that modifying their first-order theorem prover was easy for these authors, but we think that few others could have undertaken it. A hint of this is that it took a full journal article to revisit and justify the first-order lean theorem prover [Fit98]. So, we think lean theorem proving is an interesting technology, and we have used it to define the logic functor Prop, but it does not by itself enable the average user to build a customized logic.

Our main concern is to make sure that logic functors can be composed in a way that guarantees the validity of certain properties of the resulting logics. This led us to define technical properties that simply describe how logic functors behave: consistency/completeness, reducedness, etc. This concern is complementary to that of actually implementing customized logics, e.g., in logical frameworks like Isabelle [Pau94], Edinburgh LF [HHP93], or Open Mechanized Reasoning Systems [GPT96], or even in a programming language. These frameworks allow users to implement a customized logic, but they do not help users prove the completeness and consistency of the resulting theorem prover. This is not to say that these frameworks do not help at all: for instance, axiomatic type classes have been introduced in Isabelle [Wen97] in order to permit automatic admissibility checks. Another observation is that these frameworks mostly offer environments for interactive theorem proving, which is incompatible with the objective of building fully automatic embeddable logic components.

In essence, our work is closer to work on static program analysis toolboxes (e.g., PAG [AM95]), in which the authors assemble known results of lattice theory to combine domain operators like product, sets of, and lists, in order to build abstract domains and automatically derive a fixed-point solver for these domains. The fact that in the most favourable cases (e.g., Prop(A)) our subsumption relations form (partial) lattices is another connection with these works. However, our framework is more flexible because it makes it possible to build lattices from domains that are not lattices. In particular, the logic functor Prop acts as a lattice completion operator on reduced logics.
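
The following sketch conveys the idea of such a completion (it is illustrative only, and much poorer than the actual Prop functor, which for instance also handles negation): an arbitrary atomic logic is lifted to disjunctions of conjunctions of its formulas, which provides conjunction and disjunction operators even when the argument logic has none, while the derived prover is sound in general and complete only under additional hypotheses, such as reducedness.

  module type ATOMIC = sig
    type formula
    val entails : formula -> formula -> bool
  end

  module DnfCompletion (A : ATOMIC) = struct
    type formula = A.formula list list    (* disjunction of conjunctions *)

    (* c1 entails c2 if every atom required by c2 is covered by an atom of c1. *)
    let conj_entails c1 c2 =
      List.for_all (fun a2 -> List.exists (fun a1 -> A.entails a1 a2) c1) c2

    (* f1 entails f2 if every disjunct of f1 entails some disjunct of f2:
       sound whenever A.entails is sound, complete only under extra hypotheses. *)
    let entails f1 f2 =
      List.for_all (fun c1 -> List.exists (fun c2 -> conj_entails c1 c2) f2) f1

    let disj f1 f2 = f1 @ f2                                  (* lattice join *)
    let conj f1 f2 =                                          (* lattice meet *)
      List.concat (List.map (fun c1 -> List.map (fun c2 -> c1 @ c2) f2) f1)
  end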

5.2  Further Work

Further work is to continue the exploration and implementation of logics as logic functors, which will certainly lead to the development of new functors. We also plan to extend our logic functors in several ways, to reconstruct more logics in order to challenge our logic functors and discover new ones, and to explore new applications of them.

