
 Publications of year 2014
 Thesis
1. Sébastien Ferré. Reconciling Expressivity and Usability in Information Access - From Filesystems to the Semantic Web. Habilitation thesis, Matisse, Univ. Rennes 1, 2014. Note: Habilitation à Diriger des Recherches (HDR), defended on November 6th. Keyword(s): expressivity, usability, information access, information retrieval, query language, navigation structure, interactive view, abstract conceptual navigation, file system, semantic web.
Abstract:
 In many domains where information access plays a central role, there is a gap between expert users who can ask complex questions through formal query languages (e.g., SQL), and lay users who either are dependent on expert users, or must restrict themselves to ask simpler questions (e.g., keyword search). Because of the formal nature of those languages, there seems to be an unescapable trade-off between expressivity and usability in information systems. The objective of this thesis is to present a number of results and perspectives that show that the expressivity of formal languages can be reconciled with the usability of widespread information systems (e.g., browsing, Faceted Search (FS)). The final aim of this work is to empower people with the capability to produce, explore, and analyze their data in a powerful way.

@PhdThesis{Fer2014hdr,
type = {Habilitation thesis},
author = {Sébastien Ferré},
title = {Reconciling Expressivity and Usability in Information Access - From Filesystems to the Semantic Web},
school = {Matisse, Univ. Rennes 1},
year = {2014},
note = {Habilitation à Diriger des Recherches (HDR), defended on November 6th},
keywords = {expressivity, usability, information access, information retrieval, query language, navigation structure, interactive view, abstract conceptual navigation, file system, semantic web},
abstract = {In many domains where information access plays a central role, there is a gap between expert users who can ask complex questions through formal query languages (e.g., SQL), and lay users who either are dependent on expert users, or must restrict themselves to ask simpler questions (e.g., keyword search). Because of the formal nature of those languages, there seems to be an unescapable trade-off between expressivity and usability in information systems. The objective of this thesis is to present a number of results and perspectives that show that the expressivity of formal languages can be reconciled with the usability of widespread information systems (e.g., browsing, Faceted Search (FS)). The final aim of this work is to empower people with the capability to produce, explore, and analyze their data in a powerful way.},
}


 Articles in journal or book chapters
1. Nicolas Béchet, Peggy Cellier, Thierry Charnois, and Bruno Crémilleux. Fouille de motifs séquentiels pour la découverte de relations entre gènes et maladies rares. Revue d'Intelligence Artificielle, 28(2-3):245-270, 2014. Keyword(s): data mining, sequential patterns, information extraction, linguistic patterns, rare diseases.
Abstract:
 Orphanet est un organisme dont l'objectif est notamment de rassembler des collections d'articles traitant de maladies rares. Cependant, l'acquisition de nouvelles connaissances dans ce domaine est actuellement réalisée manuellement. Dès lors, obtenir de nouvelles informations relatives aux maladies rares est un processus chronophage. Permettre d'obtenir ces informations de manière automatique est donc un enjeu important. Dans ce contexte, nous proposons d'aborder la question de l'extraction de relations entre gènes et maladies rares en utilisant des approches de fouille de données, plus particulièrement de fouille de motifs séquentiels sous contraintes. Nos expérimentations montrent l'intérêt de notre approche pour l'extraction de relations entre gènes et maladies rares à partir de résumés d'articles de PubMed.

@article{BCCC14,
author = {Nicolas Béchet and Peggy Cellier and Thierry Charnois and Bruno Crémilleux},
title = {Fouille de motifs séquentiels pour la découverte de relations entre gènes et maladies rares},
journal = {Revue d'Intelligence Artificielle},
volume = {28},
number = {2-3},
pages = {245--270},
year = {2014},
keywords = {data mining, sequential patterns, information extraction, linguistic patterns, rare diseases},
abstract = {Orphanet est un organisme dont l'objectif est notamment de rassembler des collections d'articles traitant de maladies rares. Cependant, l'acquisition de nouvelles connaissances dans ce domaine est actuellement réalisée manuellement. Dès lors, obtenir de nouvelles informations relatives aux maladies rares est un processus chronophage. Permettre d'obtenir ces informations de manière automatique est donc un enjeu important. Dans ce contexte, nous proposons d'aborder la question de l'extraction de relations entre gènes et maladies rares en utilisant des approches de fouille de données, plus particulièrement de fouille de motifs séquentiels sous contraintes. Nos expérimentations montrent l'intérêt de notre approche pour l'extraction de relations entre gènes et maladies rares à partir de résumés d'articles de PubMed.},

}


2. Mireille Ducassé and Peggy Cellier. Fair and Fast Convergence on Islands of Agreement in Multicriteria Group Decision Making by Logical Navigation. Group Decision and Negotiation, 23(4):673-694, July 2014. [WWW] [doi:10.1007/s10726-013-9372-4] Keyword(s): Multicriteria Decision, Multicriteria Sorting, Consensus Reaching, Group Decision Support System, ThinkLets, Logical Information Systems, Formal Concept Analysis.
Abstract:
 Reasoning on multiple criteria is a key issue in group decision to take into account the multidimensional nature of real-world decision-making problems. In order to reduce the induced information overload, in multicriteria decision analysis, criteria are in general aggregated, in many cases by a simple discriminant function of the form of a weighted sum. It requires to, a priori and completely, elicit preferences of decision makers. That can be quite arbitrary. In everyday life, to reduce information overload people often use a heuristic, called "Take-the-best": they take criteria in a predefined order, the first criterion which discriminates the alternatives at stake is used to make the decision. Although useful, the heuristic can be biased. This article proposes the Logical Multicriteria Sort process to support multicriteria sorting within islands of agreement. It therefore does not require a complete and consistent a priori set of preferences, but rather supports groups to quickly identify the criteria for which an agreement exists. The process can be seen as a generalization of Take-the-best. It also proposes to consider one criterion at a time but once a criterion has been found discriminating it is recorded, the process is iterated and relevant criteria are logically combined. Hence, the biases of Take-the-best are reduced. The process is supported by a GDSS, based on Logical Information Systems, which gives instantaneous feedbacks of each small decision and keeps tracks of all of the decisions taken so far. The process is incremental, each step involves low information load. It guarantees some fairness because all considered alternatives are systematically analyzed along the selected criteria. A successful case study is reported.

@Article{ducasse2013b,
Author={Mireille Ducass\'e and Peggy Cellier},
Title={Fair and Fast Convergence on Islands of Agreement in Multicriteria Group Decision Making by Logical Navigation},
Pages={673-694},
doi={10.1007/s10726-013-9372-4},
Journal={Group Decision and Negotiation},
publisher={Springer Netherlands},
Year={2014},
Volume={23},
Number={4},
Month={July},
url={http://dx.doi.org/10.1007/s10726-013-9372-4},
issn={0926-2644},
Keywords={Multicriteria Decision, Multicriteria Sorting, Consensus Reaching, Group Decision Support System, ThinkLets, Logical Information Systems, Formal Concept Analysis},
Abstract={ Reasoning on multiple criteria is a key issue in group decision to take into account the multidimensional nature of real-world decision-making problems. In order to reduce the induced information overload, in multicriteria decision analysis, criteria are in general aggregated, in many cases by a simple discriminant function of the form of a weighted sum. It requires to, a priori and completely, elicit preferences of decision makers. That can be quite arbitrary. In everyday life, to reduce information overload people often use a heuristic, called ``Take-the-best'': they take criteria in a predefined order, the first criterion which discriminates the alternatives at stake is used to make the decision. Although useful, the heuristic can be biased. This article proposes the Logical Multicriteria Sort process to support multicriteria sorting within islands of agreement. It therefore does not require a complete and consistent a priori set of preferences, but rather supports groups to quickly identify the criteria for which an agreement exists. The process can be seen as a generalization of Take-the-best. It also proposes to consider one criterion at a time but once a criterion has been found discriminating it is recorded, the process is iterated and relevant criteria are logically combined. Hence, the biases of Take-the-best are reduced. The process is supported by a GDSS, based on Logical Information Systems, which gives instantaneous feedbacks of each small decision and keeps tracks of all of the decisions taken so far. The process is incremental, each step involves low information load. It guarantees some fairness because all considered alternatives are systematically analyzed along the selected criteria. A successful case study is reported. }
}
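
The abstract above describes the Take-the-best heuristic that Logical Multicriteria Sort generalizes: criteria are examined in a fixed order, and the first one that discriminates the alternatives makes the decision. A minimal sketch of that baseline heuristic, where the data representation and criterion functions are illustrative assumptions, not taken from the paper:

```python
def take_the_best(a, b, criteria):
    """Take-the-best heuristic: scan criteria in their predefined order;
    the first criterion that discriminates the two alternatives decides."""
    for criterion in criteria:
        score_a, score_b = criterion(a), criterion(b)
        if score_a != score_b:
            return a if score_a > score_b else b
    return None  # no criterion discriminates: no decision is made

# Hypothetical example: two options compared on safety first, then price.
criteria = [lambda x: x["safety"], lambda x: x["price_score"]]
option1 = {"safety": 5, "price_score": 2}
option2 = {"safety": 5, "price_score": 4}
chosen = take_the_best(option1, option2, criteria)  # safety ties, price decides
```

As the abstract notes, stopping at the first discriminating criterion is what makes the heuristic fast but potentially biased; the paper's process instead records each discriminating criterion, iterates, and combines the relevant criteria logically.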


3. Sébastien Ferré. SQUALL: The expressiveness of SPARQL 1.1 made available as a controlled natural language. Data & Knowledge Engineering, 94:163-188, 2014. [WWW] [doi:10.1016/j.datak.2014.07.010] Keyword(s): controlled natural language, semantic web, RDF, SPARQL, expressiveness.
Abstract:
 The Semantic Web (SW) is now made of billions of triples, which are available as Linked Open Data (LOD) or as RDF stores. The SPARQL query language provides a very expressive way to search and explore this wealth of semantic data. However, user-friendly interfaces are needed to bridge the gap between end-users and SW formalisms. Navigation-based interfaces and natural language interfaces require no or little training, but they cover a small fragment of SPARQL's expressivity. We propose SQUALL, a query and update language that provides the full expressiveness of SPARQL~1.1 through a flexible controlled natural language (e.g., solution modifiers through superlatives, relational algebra through coordinations, filters through comparatives). A comprehensive and modular definition is given as a Montague grammar, and an evaluation of naturalness is done on the QALD challenge. SQUALL is conceived as a component of natural language interfaces, to be combined with lexicons, guided input, and contextual disambiguation. It is available as a Web service that translates SQUALL sentences to SPARQL, and submits them to SPARQL endpoints (e.g., DBpedia), therefore ensuring SW compliance, and leveraging the efficiency of SPARQL engines.

@Article{Fer2014dke,
author = {Sébastien Ferré},
title = {{SQUALL}: The expressiveness of {SPARQL} 1.1 made available as a controlled natural language},
journal = {Data \& Knowledge Engineering},
volume = {94},
pages = {163-188},
year = {2014},
doi = {10.1016/j.datak.2014.07.010},
url = {http://authors.elsevier.com/sd/article/S0169023X1400069X},
keywords = {controlled natural language, semantic web, RDF, SPARQL, expressiveness},
abstract = {The Semantic Web (SW) is now made of billions of triples, which are available as Linked Open Data (LOD) or as RDF stores. The SPARQL query language provides a very expressive way to search and explore this wealth of semantic data. However, user-friendly interfaces are needed to bridge the gap between end-users and SW formalisms. Navigation-based interfaces and natural language interfaces require no or little training, but they cover a small fragment of SPARQL's expressivity. We propose SQUALL, a query and update language that provides the full expressiveness of SPARQL~1.1 through a flexible controlled natural language (e.g., solution modifiers through superlatives, relational algebra through coordinations, filters through comparatives). A comprehensive and modular definition is given as a Montague grammar, and an evaluation of naturalness is done on the QALD challenge. SQUALL is conceived as a component of natural language interfaces, to be combined with lexicons, guided input, and contextual disambiguation. It is available as a Web service that translates SQUALL sentences to SPARQL, and submits them to SPARQL endpoints (e.g., DBpedia), therefore ensuring SW compliance, and leveraging the efficiency of SPARQL engines.},

}


 Conference articles
1. Mouhamadou Ba, Sébastien Ferré, and Mireille Ducassé. Convertibility between input and output types to help compose services in bioinformatics. In Colloque africain sur la recherche en informatique et mathématiques appliquées (CARI), pages 141-148, 2014.
@inproceedings{ba2014cari,
author = {Mouhamadou Ba and Sébastien Ferré and Mireille Ducassé},
title = {Convertibility between input and output types to help compose services in bioinformatics},
booktitle={Colloque africain sur la recherche en informatique et mathématiques appliquées ({CARI})},
pages = {141--148},
year={2014},

}


2. Mouhamadou Ba, Sébastien Ferré, and Mireille Ducassé. Convertibilité entre types d'entrée et de sortie pour la composition de services en bio-informatique. In Conf. Reconnaissance de Formes et Intelligence Artificielle (RFIA), 2014.
@inproceedings{ba2014rfia,
title={Convertibilité entre types d'entrée et de sortie pour la composition de services en bio-informatique},
author={Mouhamadou Ba and Sébastien Ferré and Mireille Ducassé},
booktitle={Conf. Reconnaissance de Formes et Intelligence Artificielle ({RFIA})},
year={2014},

}


3. Mouhamadou Ba, Sébastien Ferré, and Mireille Ducassé. Generating Data Converters to Help Compose Services in Bioinformatics Workflows. In Hendrik Decker et al., editor, Int. Conf. Database and Expert Systems Applications (DEXA), LNCS 8644, pages 284-298, 2014. Springer. Keyword(s): workflow, bioinformatic, data converter, convertibility, rule system.
Abstract:
 Heterogeneity of data and data formats in bioinformatics often entails a mismatch between inputs and outputs of different services, making it difficult to compose them into workflows. To reduce those mismatches, bioinformatics platforms propose ad hoc converters written by hand. This article proposes to systematically detect convertibility from output types to input types. Convertibility detection relies on abstract types, close to XML Schema, allowing to abstract data while precisely accounting for its composite structure. Detection is accompanied by an automatic generation of converters between input and output XML data. Our experiment on bioinformatics services and datatypes, performed with an implementation of our approach, shows that the detected convertibilities and produced converters are relevant from a biological point of view. Furthermore they automatically produce a graph of potentially compatible services with a connectivity higher than with the ad hoc approaches.

@inproceedings{ba2014dexa,
author = {Mouhamadou Ba and Sébastien Ferré and Mireille Ducassé},
title = {Generating Data Converters to Help Compose Services in Bioinformatics Workflows},
booktitle = {Int. Conf. Database and Expert Systems Applications ({DEXA})},
editor = {Hendrik Decker {et al.}},
series = {LNCS 8644},
pages = {284--298},
year = {2014},
publisher = {Springer},
keywords = {workflow, bioinformatic, data converter, convertibility, rule system},
abstract = {Heterogeneity of data and data formats in bioinformatics often entails a mismatch between inputs and outputs of different services, making it difficult to compose them into workflows. To reduce those mismatches, bioinformatics platforms propose ad hoc converters written by hand. This article proposes to systematically detect convertibility from output types to input types. Convertibility detection relies on abstract types, close to XML Schema, allowing to abstract data while precisely accounting for its composite structure. Detection is accompanied by an automatic generation of converters between input and output XML data. Our experiment on bioinformatics services and datatypes, performed with an implementation of our approach, shows that the detected convertibilities and produced converters are relevant from a biological point of view. Furthermore they automatically produce a graph of potentially compatible services with a connectivity higher than with the ad hoc approaches.},

}
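
The abstract describes detecting whether one service's output type is convertible to another service's input type by comparing composite type structures. A toy sketch of such a structural check; the type encoding, the field names, and the projection rule here are assumptions for illustration, while the paper's system works on richer XML-Schema-like abstract types:

```python
# Toy types: a primitive is a string; a composite is ("record", {field_name: type}).
def convertible(src, dst):
    """True when data of type src can be projected down to type dst."""
    if isinstance(dst, str):
        if src == dst:                      # identical primitives
            return True
        if isinstance(src, tuple):          # try any field of a composite source
            return any(convertible(field, dst) for field in src[1].values())
        return False
    _, fields = dst                         # composite target: every field
    return all(convertible(src, t) for t in fields.values())  # must be recoverable

# Hypothetical bioinformatics types: a FASTA-like record and a bare sequence.
fasta = ("record", {"id": "string", "sequence": "dna"})
ok = convertible(fasta, "dna")              # extract the sequence from the record
```

A detected convertibility would then drive the generation of an actual converter between the corresponding XML representations, as the paper does.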


4. Soda Cissé, Peggy Cellier, and Olivier Ridoux. Segmentation of Geolocalized Trajectories using Exponential Moving Average. In Colloque Africain sur la Recherche en Informatique et Mathématiques Appliquées (CARI), pages 149-156, 2014. Keyword(s): geolocalized trajectories, segmentation.
Abstract:
 Nowadays, large sets of data describing trajectories of mobile objects are made available by the generalization of geolocalisation sensors. Relevant information, for instance, the most used routes by children to go to school or the most extensively used streets in the morning by workers, can be extracted from this amount of available data allowing, for example, to reconsider the urban space. A trajectory is represented by a set of points (x; y; t) where x and y are the geographic coordinates of a mobile object and t is a date. These data are difficult to explore and interpret in their raw form, i.e. in the form of points (x; y; t), because they are noisy, irregularly sampled and too low level. A first step to make them usable is to resample the data, smooth it, and then to segment it into higher level segments (e.g. "stops" and "moves") that give a better grip for interpretation than the raw coordinates. In this paper, we propose a method for the segmentation of these trajectories in accelerate/decelerate segments which is based on the computation of exponential moving averages (EMA). We have conducted experiments where the exponential moving average proves to be an efficient smoothing function, and the difference between two EMA of different weights proves to discover significant accelerating-decelerating segments.

@inproceedings{CisseCR2014,
author = {Soda Cissé and Peggy Cellier and Olivier Ridoux},
title = {Segmentation of Geolocalized Trajectories using Exponential Moving Average},
booktitle={Colloque Africain sur la Recherche en Informatique et Mathématiques Appliquées (CARI)},
pages = {149--156},
year={2014},
keywords={geolocalized trajectories, segmentation},
abstract={Nowadays, large sets of data describing trajectories of mobile objects are made available by the generalization of geolocalisation sensors. Relevant information, for instance, the most used routes by children to go to school or the most extensively used streets in the morning by workers, can be extracted from this amount of available data allowing, for example, to reconsider the urban space. A trajectory is represented by a set of points (x; y; t) where x and y are the geographic coordinates of a mobile object and t is a date. These data are difficult to explore and interpret in their raw form, i.e. in the form of points (x; y; t), because they are noisy, irregularly sampled and too low level. A first step to make them usable is to resample the data, smooth it, and then to segment it into higher level segments (e.g. "stops" and "moves") that give a better grip for interpretation than the raw coordinates. In this paper, we propose a method for the segmentation of these trajectories in accelerate/decelerate segments which is based on the computation of exponential moving averages (EMA). We have conducted experiments where the exponential moving average proves to be an efficient smoothing function, and the difference between two EMA of different weights proves to discover significant accelerating-decelerating segments.},

}
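
The abstract explains the segmentation principle: smooth the sampled speeds with an exponential moving average, then use the sign of the difference between a fast and a slow EMA to find accelerating and decelerating segments. A minimal sketch of that idea; the weights are assumptions, and the paper's pipeline also resamples the raw (x; y; t) points first:

```python
def ema(values, alpha):
    """Exponential moving average with smoothing weight alpha (0 < alpha <= 1)."""
    out, acc = [], values[0]
    for v in values:
        acc = alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

def segment(speeds, fast_alpha=0.5, slow_alpha=0.1):
    """Label each sample +1 (accelerating) or -1 (decelerating) from the sign
    of the difference between a fast EMA and a slow EMA of the speeds."""
    fast, slow = ema(speeds, fast_alpha), ema(speeds, slow_alpha)
    return [1 if f > s else -1 for f, s in zip(fast, slow)]

labels = segment([1, 2, 3, 4, 3, 2, 1])  # speeding up, then slowing down
```

The fast EMA tracks recent speed while the slow EMA tracks the longer trend, so their difference changes sign when the mobile object switches between accelerating and decelerating, which is what makes the pair of weights act as a segmenter.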


5. Mireille Ducassé and Peggy Cellier. Using Biddings and Motivations in Multi-unit Assignments. In Pascale Zaraté, Gregory E. Kersten, and Jorge E. Hernandez, editors, Group Decision and Negotiation. A Process-Oriented View, volume 180 of Lecture Notes in Business Information Processing, pages 53-61, 2014. Springer. [WWW] [doi:10.1007/978-3-319-07179-4_6] Keyword(s): group decision support, thinkLet, formal concept analysis, logical information systems, course allocation, multi-unit assignment.
Abstract:
 In this paper, we propose a process for small to medium scale multi-assignment problems. In addition to biddings, agents can give motivations to explain their choices in order to help decision makers break ties in a founded way. A group decision support system, based on Logical Information Systems, allows decision makers to easily face both biddings and motivations. Furthermore, it guarantees that all the agents are treated equally. A successful case study about a small course assignment problem at a technical university is reported.

@inproceedings{ducasse2014,
title={Using Biddings and Motivations in Multi-unit Assignments},
author={Mireille Ducassé and Peggy Cellier},
year={2014},
isbn={978-3-319-07178-7},
booktitle={Group Decision and Negotiation. A Process-Oriented View},
volume={180},
series={Lecture Notes in Business Information Processing},
editor={Zaraté, Pascale and Kersten, Gregory E. and Hernandez, Jorge E.},
doi={10.1007/978-3-319-07179-4_6},
url={http://dx.doi.org/10.1007/978-3-319-07179-4_6},
publisher={Springer},
keywords={group decision support; thinkLet; formal concept analysis; logical information systems; course allocation; multi-unit assignment},
pages={53-61},
language={English},
abstract={ In this paper, we propose a process for small to medium scale multi-assignment problems. In addition to biddings, agents can give motivations to explain their choices in order to help decision makers break ties in a founded way. A group decision support system, based on Logical Information Systems, allows decision makers to easily face both biddings and motivations. Furthermore, it guarantees that all the agents are treated equally. A successful case study about a small course assignment problem at a technical university is reported.}
}


6. Sébastien Ferré. Expressive and Scalable Query-Based Faceted Search over SPARQL Endpoints. In P. Mika, T. Tudorache, A. Bernstein, C. Welty, C. A. Knoblock, D. Vrandecic, P. T. Groth, N. F. Noy, K. Janowicz, and C. A. Goble, editors, The Semantic Web (ISWC), LNCS 8797, pages 438-453, 2014. Springer. Note: Nominee for the best research paper award. [WWW] Keyword(s): SPARQL endpoints, semantic search, faceted search, user interaction, SPARQL queries, query-based faceted search, expressivity, scalability, portability, usability, Sparklis.
Abstract:
 Linked data is increasingly available through SPARQL endpoints, but exploration and question answering by regular Web users largely remain an open challenge. Users have to choose between the expressivity of formal languages such as SPARQL, and the usability of tools based on navigation and visualization. In a previous work, we have proposed Query-based Faceted Search (QFS) as a way to reconcile the expressivity of formal languages and the usability of faceted search. In this paper, we further reconcile QFS with scalability and portability by building QFS over SPARQL endpoints. We also improve expressivity and readability. Many SPARQL features are now covered: multidimensional queries, union, negation, optional, filters, aggregations, ordering. Queries are now verbalized in English, so that no knowledge of SPARQL is ever necessary. All of this is implemented in a portable Web application, Sparklis, and has been evaluated on many endpoints and questions.

@inproceedings{Fer2014iswc,
author = {Sébastien Ferré},
title = {Expressive and Scalable Query-Based Faceted Search over {SPARQL} Endpoints},
booktitle = {The Semantic Web ({ISWC})},
pages = {438--453},
year = {2014},
url = {http://dx.doi.org/10.1007/978-3-319-11915-1_28},
editor = {P. Mika and T. Tudorache and A. Bernstein and C. Welty and C. A. Knoblock and D. Vrandecic and P. T. Groth and N. F. Noy and K. Janowicz and C. A. Goble},
series = {LNCS 8797},
publisher = {Springer},
note = {Nominee for the best research paper award},
keywords = {SPARQL endpoints, semantic search, faceted search, user interaction, SPARQL queries, query-based faceted search, expressivity, scalability, portability, usability, Sparklis},
abstract = {Linked data is increasingly available through SPARQL endpoints, but exploration and question answering by regular Web users largely remain an open challenge. Users have to choose between the expressivity of formal languages such as SPARQL, and the usability of tools based on navigation and visualization. In a previous work, we have proposed Query-based Faceted Search (QFS) as a way to reconcile the expressivity of formal languages and the usability of faceted search. In this paper, we further reconcile QFS with scalability and portability by building QFS over SPARQL endpoints. We also improve expressivity and readability. Many SPARQL features are now covered: multidimensional queries, union, negation, optional, filters, aggregations, ordering. Queries are now verbalized in English, so that no knowledge of SPARQL is ever necessary. All of this is implemented in a portable Web application, Sparklis, and has been evaluated on many endpoints and questions.},

}


7. Sébastien Ferré. SPARKLIS: a SPARQL Endpoint Explorer for Expressive Question Answering. In M. Horridge, M. Rospocher, and J. van Ossenbruggen, editors, ISWC Posters & Demonstrations Track, volume 1272 of CEUR Workshop Proceedings, pages 45-48, 2014. CEUR-WS.org. [WWW] Keyword(s): demo, SPARQL endpoints, semantic search, faceted search, user interaction, SPARQL queries, query-based faceted search, expressivity, scalability, portability, usability, Sparklis.
Abstract:
 SPARKLIS is a Semantic Web tool that helps users explore SPARQL endpoints by guiding them in the interactive building of questions and answers, from simple ones to complex ones. It combines the fine-grained guidance of faceted search, most of the expressivity of SPARQL, and the readability of (controlled) natural languages. No endpoint-specific configuration is necessary, and no knowledge of SPARQL and the data schema is required from users. This demonstration paper is a companion to the research paper~\cite{Fer2014iswc}.

@inproceedings{Fer2014demo,
author = {Sébastien Ferré},
title = {{SPARKLIS:} a {SPARQL} Endpoint Explorer for Expressive Question Answering},
booktitle = {{ISWC} Posters {\&} Demonstrations Track},
pages = {45--48},
year = {2014},
url = {http://ceur-ws.org/Vol-1272/paper_39.pdf},
editor = {M. Horridge and M. Rospocher and J. van Ossenbruggen},
series = {{CEUR} Workshop Proceedings},
volume = {1272},
publisher = {CEUR-WS.org},
keywords = {demo, SPARQL endpoints, semantic search, faceted search, user interaction, SPARQL queries, query-based faceted search, expressivity, scalability, portability, usability, Sparklis},
abstract = {SPARKLIS is a Semantic Web tool that helps users explore SPARQL endpoints by guiding them in the interactive building of questions and answers, from simple ones to complex ones. It combines the fine-grained guidance of faceted search, most of the expressivity of SPARQL, and the readability of (controlled) natural languages. No endpoint-specific configuration is necessary, and no knowledge of SPARQL and the data schema is required from users. This demonstration paper is a companion to the research paper~\cite{Fer2014iswc}.},

}


8. Annie Foret. On Associative Lambek Calculus Extended with Basic Proper Axioms. In C. Casadio, B. Coecke, M. Moortgat, and P. Scott, editors, Categories and Types in Logic, Language, and Physics - Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday, LNCS 8222, pages 172-187, 2014. Springer. [WWW] Keyword(s): lambek calculus, associativity.
Abstract:
 The purpose of this article is to show that the associative Lambek calculus extended with basic proper axioms can be simulated by the usual associative Lambek calculus, with the same number of types per word in a grammar. An analogous result was shown earlier for pregroup grammars (2007). We consider Lambek calculus with product, as well as the product-free version.

@inproceedings{Foret2014birthday,
author = {Annie Foret},
title = {On Associative Lambek Calculus Extended with Basic Proper Axioms},
booktitle = {Categories and Types in Logic, Language, and Physics - Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday},
pages = {172--187},
year = {2014},
url = {http://dx.doi.org/10.1007/978-3-642-54789-8_10},
keywords = {lambek calculus, associativity},
abstract = {The purpose of this article is to show that the associative Lambek calculus extended with basic proper axioms can be simulated by the usual associative Lambek calculus, with the same number of types per word in a grammar. An analogous result was shown earlier for pregroup grammars (2007). We consider Lambek calculus with product, as well as the product-free version.},
editor = {C. Casadio and B. Coecke and M. Moortgat and P. Scott},
series = {LNCS 8222},
publisher = {Springer},

}


9. Annie Foret. On Harmonic CCG and Pregroup Grammars. In N. Asher and S. Soloviev, editors, Int. Conf. Logical Aspects of Computational Linguistics (LACL), LNCS 8535, pages 83-95, 2014. Springer. [WWW] Keyword(s): pregroup grammar, CCG.
Abstract:
 This paper studies mappings between CCG and pregroup grammars, to allow a transfer of linguistic resources from one formalism to the other. We focus on mappings that preserve the binary structures, we also discuss some possible alternatives in the underlying formalisms, with some experiments.

@inproceedings{Foret2014lacl,
author = {Annie Foret},
title = {On Harmonic {CCG} and Pregroup Grammars},
booktitle = {Int. Conf. Logical Aspects of Computational Linguistics ({LACL})},
pages = {83--95},
year = {2014},
url = {http://dx.doi.org/10.1007/978-3-662-43742-1_7},
keywords = {pregroup grammar, CCG},
abstract = {This paper studies mappings between CCG and pregroup grammars, to allow a transfer of linguistic resources from one formalism to the other. We focus on mappings that preserve the binary structures, we also discuss some possible alternatives in the underlying formalisms, with some experiments.},
editor = {N. Asher and S. Soloviev},
series = {LNCS 8535},
publisher = {Springer},

}


10. Solen Quiniou, Peggy Cellier, and Thierry Charnois. Fouille de données pour associer des noms de sessions aux articles scientifiques. In Brigitte Bigi, editor, Défi Fouille de Textes - DEFT 2014 (Atelier TALN), 2014. Laboratoire Parole et Langage. Note: ISBN: 978-2-9518233-6-5. Keyword(s): data mining, sequence mining, graph mining, paper categorisation.
@inproceedings{QCC14,
author = {Solen Quiniou and Peggy Cellier and Thierry Charnois},
title = {Fouille de données pour associer des noms de sessions aux articles scientifiques},
booktitle = {Défi Fouille de Textes - DEFT 2014 (Atelier TALN)},
editor = {Brigitte Bigi},
publisher = {Laboratoire Parole et Langage},
note = {ISBN: 978-2-9518233-6-5},
year = {2014},
keywords = {data mining, sequence mining, graph mining, paper categorisation},
abstract = {Nous décrivons dans cet article notre participation à l'édition 2014 de DEFT. Nous nous intéressons à la tâche consistant à associer des noms de session aux articles d'une conférence. Pour ce faire, nous proposons une approche originale, symbolique et non supervisée, de découverte de connaissances. L'approche combine des méthodes de fouille de données séquentielles et de fouille de graphes. La fouille de séquences permet d'extraire des motifs fréquents dans le but de construire des descriptions des articles et des sessions. Ces descriptions sont ensuite représentées par un graphe. Une technique de fouille de graphes appliquée sur ce graphe permet d'obtenir des collections de sous-graphes homogènes, correspondant à des collections d'articles et de noms de sessions.},

}



Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
