
D. Bechet,
A. Foret,
and I. Tellier.
Learnability of Pregroup Grammars.
Studia Logica,
87(2-3),
2007.
Keyword(s): Learning from positive examples,
Pregroup grammars,
Computational linguistics,
parsing,
Categorial Grammars,
constraints.
Abstract:
This paper investigates the learnability by positive examples in the sense of Gold of Pregroup Grammars. In a first part, Pregroup Grammars are presented and a new parsing strategy is proposed. Then, theoretical learnability and non-learnability results for subclasses of Pregroup Grammars are proved. In the last two parts, we focus on learning Pregroup Grammars from a special kind of input called feature-tagged examples. A learning algorithm based on the parsing strategy presented in the first part is given. Its validity is proved and its properties are exemplified.
@Article{Foret07a,
author = {D. Bechet and A. Foret and I. Tellier},
title = {Learnability of Pregroup Grammars},
journal = {Studia Logica},
volume = {87},
number = {2-3},
year = 2007,
abstract = {This paper investigates the learnability by positive examples in the sense of Gold of Pregroup Grammars. In a first part, Pregroup Grammars are presented and a new parsing strategy is proposed. Then, theoretical learnability and non-learnability results for subclasses of Pregroup Grammars are proved. In the last two parts, we focus on learning Pregroup Grammars from a special kind of input called feature-tagged examples. A learning algorithm based on the parsing strategy presented in the first part is given. Its validity is proved and its properties are exemplified.},
keywords = {Learning from positive examples, Pregroup grammars, Computational linguistics, parsing, Categorial Grammars, constraints},
}

D. Béchet,
R. Bonato,
A. Dikovsky,
A. Foret,
Y. Le Nir,
E. Moreau,
C. Retoré,
and I. Tellier.
Modèles algorithmiques de l'acquisition de la syntaxe : concepts et méthodes, résultats et problèmes.
Recherches linguistiques de Vincennes,
2007.
Note: Vol. 37, Presses Universitaires de Vincennes.
[PDF]
Keyword(s): Acquisition syntaxique,
inférence grammaticale,
grammaires catégorielles,
modèle de Gold,
ressources syntaxiques (syntax learning,
grammatical inference,
categorial grammars,
Gold's model,
syntactical resources).
Abstract:
Dans cet article, nous présentons nos résultats récents concernant l'apprentissage de la syntaxe des langues naturelles, en adoptant le point de vue de l'inférence grammaticale symbolique. L'objectif est d'identifier à partir d'exemples, dans une classe de grammaires connue à l'avance, une grammaire particulière qui engendre lesdits exemples. Le modèle de Gold fixe les conditions et le critère de réussite d'une telle entreprise : quand un algorithme produisant une grammaire candidate existe-t-il ? quelle structure doivent contenir les exemples : suites de mots, suites de mots étiquetés, arbres d'analyse ? D'un point de vue théorique, nos résultats établissent l'apprenabilité ou la non-apprenabilité de certaines classes de grammaires catégorielles. En pratique, nos résultats permettent aussi d'acquérir automatiquement des ressources syntaxiques à partir de données réelles. Au final, nous discutons de l'intérêt de cette approche pour modéliser l'acquisition de leur langue naturelle par les enfants ainsi que pour construire automatiquement des grammaires électroniques à partir de corpus.
In this paper, we present our recent results on the acquisition of the syntax of natural languages, from the point of view of the theory of grammatical inference. Given a class of possible grammars, the objective is to identify, from a set of positive examples, a grammar in the class which produces the examples. The Gold model formalises the learning process and gives stringent criteria for its success: when does there exist an algorithm producing a target grammar? What kind of structure should the examples have (strings of words, strings of tagged words, trees)? From a theoretical point of view, our results establish the learnability or the unlearnability of various classes of categorial grammars. From a practical perspective, these results enable the extraction of syntactic information from real data. Finally, we discuss the interest of this approach for modelling child language acquisition and for automated induction of grammars from corpora.
@Article{Foret07b,
author = {D. Béchet and R. Bonato and A. Dikovsky and A. Foret and Y. Le~Nir and E. Moreau and C. Retoré and I. Tellier},
title = {Modèles algorithmiques de l'acquisition de la syntaxe : concepts et méthodes, résultats et problèmes},
journal = {Recherches linguistiques de Vincennes},
note = {Vol. 37, Presses Universitaires de Vincennes},
year = 2007,
pdf = {http://rlv.revues.org/document1492.html},
keywords = {Acquisition syntaxique, inférence grammaticale, grammaires catégorielles, modèle de Gold, ressources syntaxiques (syntax learning, grammatical inference, categorial grammars, Gold's model, syntactical resources)},
abstract = { Dans cet article, nous présentons nos résultats récents concernant l'apprentissage de la syntaxe des langues naturelles, en adoptant le point de vue de l'inférence grammaticale symbolique. L'objectif est d'identifier à partir d'exemples, dans une classe de grammaires connue à l'avance, une grammaire particulière qui engendre lesdits exemples. Le modèle de Gold fixe les conditions et le critère de réussite d'une telle entreprise : quand un algorithme produisant une grammaire candidate existe-t-il ? quelle structure doivent contenir les exemples : suites de mots, suites de mots étiquetés, arbres d'analyse ? D'un point de vue théorique, nos résultats établissent l'apprenabilité ou la non-apprenabilité de certaines classes de grammaires catégorielles. En pratique, nos résultats permettent aussi d'acquérir automatiquement des ressources syntaxiques à partir de données réelles. Au final, nous discutons de l'intérêt de cette approche pour modéliser l'acquisition de leur langue naturelle par les enfants ainsi que pour construire automatiquement des grammaires électroniques à partir de corpus.
In this paper, we present our recent results on the acquisition of the syntax of natural languages, from the point of view of the theory of grammatical inference. Given a class of possible grammars, the objective is to identify, from a set of positive examples, a grammar in the class which produces the examples. The Gold model formalises the learning process and gives stringent criteria for its success: when does there exist an algorithm producing a target grammar? What kind of structure should the examples have (strings of words, strings of tagged words, trees)? From a theoretical point of view, our results establish the learnability or the unlearnability of various classes of categorial grammars. From a practical perspective, these results enable the extraction of syntactic information from real data. Finally, we discuss the interest of this approach for modelling child language acquisition and for automated induction of grammars from corpora. },
}

Olivier Bedel,
Sébastien Ferré,
and Olivier Ridoux.
Exploring a Geographical Dataset with GEOLIS.
In DEXA Work. Advances in Conceptual Knowledge Engineering (ACKE),
pages 540-544,
2007.
IEEE Computer Society.
[PDF]
Keyword(s): logical information system,
geographical information system,
information retrieval,
spatial logic.
Abstract:
Geographical data are mainly structured in layers of information. However, this model of organisation is not convenient for navigation inside a dataset, and so limits geographical data exploration to querying. We think information retrieval could be made easier in GIS by introducing navigation based on the properties of geographical objects. For this purpose, we propose a prototype, GEOLIS, which tightly combines querying and navigation in the search process over geographical data. GEOLIS relies on Logical Information Systems (LIS), which are based on Formal Concept Analysis (FCA) and logics. In this paper, we detail the data organisation and the navigation process in GEOLIS. We also present the results of an experiment conducted on a real dataset.
@InProceedings{BedFerRid2007a,
author = {Olivier Bedel and Sébastien Ferré and Olivier Ridoux},
title = {Exploring a Geographical Dataset with {GEOLIS}},
booktitle = {DEXA Work. Advances in Conceptual Knowledge Engineering ({ACKE})},
year = {2007},
pages = {540--544},
publisher = {IEEE Computer Society},
pdf = {http://www.irisa.fr/LIS/ferre/papers/acke2007.pdf},
keywords = {logical information system, geographical information system, information retrieval, spatial logic},
abstract = {Geographical data are mainly structured in layers of information. However, this model of organisation is not convenient for navigation inside a dataset, and so limits geographical data exploration to querying. We think information retrieval could be made easier in GIS by introducing navigation based on the properties of geographical objects. For this purpose, we propose a prototype, GEOLIS, which tightly combines querying and navigation in the search process over geographical data. GEOLIS relies on Logical Information Systems (LIS), which are based on Formal Concept Analysis (FCA) and logics. In this paper, we detail the data organisation and the navigation process in GEOLIS. We also present the results of an experiment conducted on a real dataset.},
}

D. Béchet and A. Foret.
Fully Lexicalized Pregroup Grammars.
In Proceedings of WOLLIC 2007,
volume LNCS 4576,
pages 12-25,
2007.
Springer.
[PDF]
Keyword(s): Pregroups,
Lambek Categorial Grammars,
Simulation.
Abstract:
Pregroup grammars are a context-free grammar formalism introduced as a simplification of Lambek calculus. This formalism is interesting for several reasons: the syntactical properties of words are specified by a set of types, as in the other type-based grammar formalisms; as a logical model, compositionality is easy; a polynomial-time parsing algorithm exists. However, this formalism is not completely lexicalized because each pregroup grammar is based on the free pregroup built from a set of primitive types together with a partial order, and this order is not lexical information. In fact, only the pregroup grammars that are based on primitive types with an order that is equality can be seen as fully lexicalized. We show here how we can transform, using a morphism on types, a particular pregroup grammar into another pregroup grammar that uses equality as the order on primitive types. This transformation is at most quadratic in size (linear for a fixed set of primitive types); it preserves the parse structures of sentences and the number of types assigned to a word.
@InProceedings{Foret07d,
author = {D. Béchet and A. Foret},
title = {Fully Lexicalized Pregroup Grammars},
booktitle = {Proceedings of WOLLIC 2007},
year = {2007},
series = {LNCS},
volume = {4576},
publisher = {Springer},
pdf = {http://www.springerlink.com/content/dn21412422761255/?p=8a0b76b8ad414c23a5d75a11ec27652f&pi=25},
pages = {12--25},
keywords = {Pregroups, Lambek Categorial Grammars, Simulation},
abstract = {Pregroup grammars are a context-free grammar formalism introduced as a simplification of Lambek calculus. This formalism is interesting for several reasons: the syntactical properties of words are specified by a set of types, as in the other type-based grammar formalisms; as a logical model, compositionality is easy; a polynomial-time parsing algorithm exists. However, this formalism is not completely lexicalized because each pregroup grammar is based on the free pregroup built from a set of primitive types together with a partial order, and this order is not lexical information. In fact, only the pregroup grammars that are based on primitive types with an order that is equality can be seen as fully lexicalized. We show here how we can transform, using a morphism on types, a particular pregroup grammar into another pregroup grammar that uses equality as the order on primitive types. This transformation is at most quadratic in size (linear for a fixed set of primitive types); it preserves the parse structures of sentences and the number of types assigned to a word.}
}

Peggy Cellier,
Sébastien Ferré,
Olivier Ridoux,
and Mireille Ducassé.
A Parameterized Algorithm for Exploring Concept Lattices.
In S.O. Kuznetsov and S. Schmidt, editors,
Int. Conf. Formal Concept Analysis,
LNAI 4390,
pages 114-129,
2007.
Springer.
Keyword(s): concept analysis,
algorithm,
taxonomy.
Abstract:
Formal Concept Analysis (FCA) is a natural framework for learning from positive and negative examples. Indeed, learning from examples results in sets of frequent concepts whose extent contains only these examples. In terms of association rules, the above learning strategy can be seen as searching the premises of exact rules where the consequence is fixed. In its most classical setting, FCA considers attributes as a non-ordered set. When attributes of the context are ordered, Conceptual Scaling allows the related taxonomy to be taken into account by producing a context completed with all attributes deduced from the taxonomy. The drawback, however, is that concept intents contain redundant information. In this article, we propose a parameterized generalization of a previously proposed algorithm, in order to learn rules in the presence of a taxonomy. The taxonomy is taken into account during the computation so as to remove all redundancies from intents. Simply changing one component, this parameterized algorithm can compute various kinds of concept-based rules. We present instantiations of the parameterized algorithm for learning positive and negative rules.
@InProceedings{CFRD2007,
author = {Peggy Cellier and Sébastien Ferré and Olivier Ridoux and Mireille Ducassé},
title = {A Parameterized Algorithm for Exploring Concept Lattices},
booktitle = {Int. Conf. Formal Concept Analysis},
year = {2007},
editor = {S.O. Kuznetsov and S. Schmidt},
series = {LNAI 4390},
pages = {114--129},
publisher = {Springer},
keywords = {concept analysis, algorithm, taxonomy},
abstract = {Formal Concept Analysis (FCA) is a natural framework for learning from positive and negative examples. Indeed, learning from examples results in sets of frequent concepts whose extent contains only these examples. In terms of association rules, the above learning strategy can be seen as searching the premises of exact rules where the consequence is fixed. In its most classical setting, FCA considers attributes as a non-ordered set. When attributes of the context are ordered, Conceptual Scaling allows the related taxonomy to be taken into account by producing a context completed with all attributes deduced from the taxonomy. The drawback, however, is that concept intents contain redundant information. In this article, we propose a parameterized generalization of a previously proposed algorithm, in order to learn rules in the presence of a taxonomy. The taxonomy is taken into account during the computation so as to remove all redundancies from intents. Simply changing one component, this parameterized algorithm can compute various kinds of concept-based rules. We present instantiations of the parameterized algorithm for learning positive and negative rules.},
}

Tristan Denmat,
Arnaud Gotlieb,
and Mireille Ducassé.
An Abstract Interpretation-based Combinator for Modelling While Loops in Constraint Programming.
In C. Bessière, editor,
Int. Conf. Principles and Practice of Constraint Programming (CP),
LNCS 4741,
September 2007.
Springer-Verlag.
@InProceedings{denmat07,
author={Tristan Denmat and Arnaud Gotlieb and Mireille Ducassé},
title={An Abstract Interpretation-based Combinator for Modelling While Loops in Constraint Programming},
booktitle={Int. Conf. Principles and Practice of Constraint Programming (CP)},
year={2007},
editor={C. Bessière},
series = {LNCS 4741},
publisher={Springer-Verlag},
month={September}
}

Tristan Denmat,
Arnaud Gotlieb,
and Mireille Ducassé.
Improving Constraint-Based Testing with Dynamic Linear Relaxations.
In K. Goseva-Popstojanova and P. Runeson, editors,
Int. Symp. Software Reliability Engineering (ISSRE),
November 2007.
IEEE Press.
@InProceedings{denmat07b,
author={Tristan Denmat and Arnaud Gotlieb and Mireille Ducassé},
title={Improving Constraint-Based Testing with Dynamic Linear Relaxations},
booktitle={Int. Symp. Software Reliability Engineering (ISSRE)},
year={2007},
editor={K. Goseva-Popstojanova and P. Runeson},
publisher={IEEE Press},
month={November}
}

Pierre Deransart,
Mireille Ducassé,
and Gérard Ferrand.
Une sémantique observationnelle du modèle des boîtes pour la résolution de programmes logiques.
In F. Fages, editor,
Actes des Troisièmes Journées Francophones de Programmation par Contraintes,
2007.
HAL : http://hal.inria.fr/JFPC07.
@InProceedings{deransart07,
author={Pierre Deransart and Mireille Ducassé and Gérard Ferrand},
title={Une sémantique observationnelle du modèle des boîtes pour la résolution de programmes logiques},
booktitle={Actes des Troisièmes Journées Francophones de Programmation par Contraintes},
year={2007},
editor={F. Fages},
publisher={HAL : http://hal.inria.fr/JFPC07}
}

Sébastien Ferré.
CAMELIS: Organizing and Browsing a Personal Photo Collection with a Logical Information System.
In J. Diatta,
P. Eklund,
and M. Liquière, editors,
Int. Conf. Concept Lattices and Their Applications,
volume 331 of CEUR Workshop Proceedings (ISSN 1613-0073),
pages 112123,
2007.
[PDF]
Keyword(s): logical information system,
photo collection,
organization,
information retrieval.
Abstract:
Since the arrival of digital cameras, many people are faced with the challenge of organizing and retrieving the overwhelming flow of photos their life produces. Most people put no metadata on their photos, and we believe this is because existing tools make very limited use of them. We present a tool, Camelis, that offers users an organization of photos that is dynamically computed from the metadata, making the effort to produce it worthwhile. Camelis is designed along the lines of Logical Information Systems (LIS), which are founded on logical concept analysis. Hence, (1) an expressive language can be used to describe photos and query the collection, (2) manual and automatic metadata can be smoothly integrated, and (3) expressive querying and flexible navigation can be mixed in the same search, in any order. This presentation is illustrated by experiences with a real collection of more than 5000 photos.
@InProceedings{Fer2007b,
author = {Sébastien Ferré},
title = {{CAMELIS}: Organizing and Browsing a Personal Photo Collection with a Logical Information System},
booktitle = {Int. Conf. Concept Lattices and Their Applications},
year = {2007},
pages = {112123},
editor = {J. Diatta and P. Eklund and M. Liquière},
series = {CEUR Workshop Proceedings, ISSN 1613-0073},
volume = {331},
pdf = {http://www.irisa.fr/LIS/ferre/papers/cla2007.pdf},
keywords = {logical information system, photo collection, organization, information retrieval},
abstract = {Since the arrival of digital cameras, many people are faced with the challenge of organizing and retrieving the overwhelming flow of photos their life produces. Most people put no metadata on their photos, and we believe this is because existing tools make very limited use of them. We present a tool, Camelis, that offers users an organization of photos that is dynamically computed from the metadata, making the effort to produce it worthwhile. Camelis is designed along the lines of Logical Information Systems (LIS), which are founded on logical concept analysis. Hence, (1) an expressive language can be used to describe photos and query the collection, (2) manual and automatic metadata can be smoothly integrated, and (3) expressive querying and flexible navigation can be mixed in the same search, in any order. This presentation is illustrated by experiences with a real collection of more than 5000 photos.},
}

Sébastien Ferré.
The Efficient Computation of Complete and Concise Substring Scales with Suffix Trees.
In S.O. Kuznetsov and S. Schmidt, editors,
Int. Conf. Formal Concept Analysis,
LNAI 4390,
pages 98-113,
2007.
Springer.
Keyword(s): concept analysis,
logic,
string,
suffix tree.
Abstract:
Strings are an important part of most real application multi-valued contexts. Their conceptual treatment requires the definition of {\em substring scales}, i.e., sets of relevant substrings, so as to form informative concepts. However, these scales are either defined by hand, or derived in a context-unaware manner (e.g., all words occurring in string values). We present an efficient algorithm based on suffix trees that produces complete and concise substring scales. Completeness ensures that every possible concept is formed, like when considering the scale of all substrings. Conciseness ensures the number of scale attributes (substrings) is less than the cumulated size of all string values. This algorithm is integrated in Camelis, and illustrated on the set of all ICCS paper titles.
@InProceedings{Fer2007a,
author = {Sébastien Ferré},
title = {The Efficient Computation of Complete and Concise Substring Scales with Suffix Trees},
booktitle = {Int. Conf. Formal Concept Analysis},
year = {2007},
editor = {S.O. Kuznetsov and S. Schmidt},
series = {LNAI 4390},
pages = {98--113},
publisher = {Springer},
keywords = {concept analysis, logic, string, suffix tree},
abstract = {Strings are an important part of most real application multi-valued contexts. Their conceptual treatment requires the definition of {\em substring scales}, i.e., sets of relevant substrings, so as to form informative concepts. However, these scales are either defined by hand, or derived in a context-unaware manner (e.g., all words occurring in string values). We present an efficient algorithm based on suffix trees that produces complete and concise substring scales. Completeness ensures that every possible concept is formed, like when considering the scale of all substrings. Conciseness ensures the number of scale attributes (substrings) is less than the cumulated size of all string values. This algorithm is integrated in Camelis, and illustrated on the set of all ICCS paper titles.},
}

Sébastien Ferré and Olivier Ridoux.
Logical Information Systems: from Taxonomies to Logics.
In DEXA Work. Dynamic Taxonomies and Faceted Search (FIND),
pages 212-216,
2007.
IEEE Computer Society.
[PDF]
Keyword(s): logical information system,
taxonomy,
logic.
Abstract:
Dynamic taxonomies have been proposed as a solution for combining querying and navigation, offering both expressivity and interactivity. Navigation is based on the filtering of a multidimensional taxonomy w.r.t. query answers, which helps users to focus their search. We show that properties that are commonly used only in queries can be integrated in taxonomies, and hence in navigation, by the use of so-called logics. Hand-designed taxonomies and concrete domains (e.g., dates, strings) can be combined so as to form complex taxonomies. For instance, valued attributes can be handled, and different roles between documents and locations can be distinguished. Logical Information Systems (LIS) are characterized by the combination of querying and navigation, and the systematic use of logics.
@InProceedings{FerRid2007,
author = {Sébastien Ferré and Olivier Ridoux},
title = {Logical Information Systems: from Taxonomies to Logics},
booktitle = {{DEXA} Work. Dynamic Taxonomies and Faceted Search ({FIND})},
year = {2007},
pages = {212--216},
publisher = {IEEE Computer Society},
pdf = {http://www.irisa.fr/LIS/ferre/papers/find2007.pdf},
keywords = {logical information system, taxonomy, logic},
abstract = {Dynamic taxonomies have been proposed as a solution for combining querying and navigation, offering both expressivity and interactivity. Navigation is based on the filtering of a multidimensional taxonomy w.r.t. query answers, which helps users to focus their search. We show that properties that are commonly used only in queries can be integrated in taxonomies, and hence in navigation, by the use of so-called logics. Hand-designed taxonomies and concrete domains (e.g., dates, strings) can be combined so as to form complex taxonomies. For instance, valued attributes can be handled, and different roles between documents and locations can be distinguished. Logical Information Systems (LIS) are characterized by the combination of querying and navigation, and the systematic use of logics.},
}

A. Foret.
Pregroup Calculus as a Logical Functor.
In Proceedings of WOLLIC 2007,
volume LNCS 4576,
2007.
Springer.
[PDF]
Keyword(s): Pregroups,
Lambek Categorial Grammars,
Logic Functor,
Cut Elimination.
Abstract:
The concept of pregroup was introduced by Lambek for natural language analysis, with a close link to non-commutative linear logic. We reformulate the pregroup calculus so as to extend it by composition with other logics and calculi. The cut-elimination property and the decidability property of the sequent calculus proposed in the article are shown. Properties of composed calculi are also discussed.
@InProceedings{Foret07c,
author = {A. Foret},
title = {Pregroup Calculus as a Logical Functor},
booktitle = {Proceedings of WOLLIC 2007},
year = {2007},
series = {LNCS},
volume = {4576},
publisher = {Springer},
pdf = {http://www.springerlink.com/content/f51674814541n234/?p=8a0b76b8ad414c23a5d75a11ec27652f&pi=20},
keywords = {Pregroups, Lambek Categorial Grammars, Logic Functor, Cut Elimination},
abstract = {The concept of pregroup was introduced by Lambek for natural language analysis, with a close link to non-commutative linear logic. We reformulate the pregroup calculus so as to extend it by composition with other logics and calculi. The cut-elimination property and the decidability property of the sequent calculus proposed in the article are shown. Properties of composed calculi are also discussed.},
}