



10-11 October 2005


This colloquium is organised on the occasion of the award of the title of Doctor honoris causa of the Université de Rennes 1 to Professors David Harel (The Weizmann Institute of Science, Rehovot, Israel) and Alan Willsky (MIT, Cambridge, USA).

Monday, 10 October 2005 - amphithéâtre P - IFSIC - campus Beaulieu - Rennes
9h30 - 10h30: Roger Brockett
10h45 - 11h45: Grzegorz Rozenberg
12h00 - 12h30: Rémi Gribonval
14h00 - 15h00: Steve Marcus
15h00 - 16h00: Moshe Vardi
16h30 - 17h00: André Seznec
17h00 - 17h30: Christine Guillemot

Tuesday, 11 October 2005 - amphithéâtre Louis Antoine - campus Beaulieu - Rennes
9h30 - 10h45: Alan Willsky
11h00 - 12h15: David Harel
14h00 - 15h30: round table with Alan Willsky and David Harel
16h00: press briefing
16h30: honoris causa ceremony (Présidence de l'Université de Rennes 1)

Monday, 10 October 2005 - COLLOQUIUM - amphithéâtre P - IFSIC - campus Beaulieu - Rennes
9h30 - 10h30: Roger Brockett,
Wang Professor of Computer Science and Electrical Engineering, Harvard University, Cambridge, USA.
Linear System Identification in the Nuclear Magnetic Resonance Setting

10h45 - 11h45: Grzegorz Rozenberg,
Professor at the Department of Computer Science of Leiden University, The Netherlands, and adjunct professor at the Department of Computer Science of the University of Colorado at Boulder, USA.

Gene Assembly in Ciliates: a splendid example of natural computing
Natural Computing is a dynamic and fast-growing research area concerned with computing taking place in nature and with human-designed computing inspired by nature. A popular and important topic in the former line of research is computation in living cells, and a good example of this research is the investigation of gene assembly in ciliates. Ciliates, a very ancient group of unicellular organisms, have evolved extraordinary ways of organizing, manipulating and replicating the DNA in their micronuclear genomes. The way that ciliates transform genes from their micronuclear (storage) form into their macronuclear (expression) form is the most involved DNA processing in nature that is known to us. This gene assembly process is also very interesting from the computational point of view. Current research concerning gene assembly is an example of flourishing interdisciplinary research involving computer scientists and molecular biologists. The results obtained thus far are interesting and beneficial for both molecular biology and computer science. One of the central research issues is the quest for the discovery of "biomolecular hardware" (bioware) needed for the implementation (in vivo) of molecular operations accomplishing gene assembly. This quest will be the topic of our lecture. We will discuss the biology of gene assembly and demonstrate a solution to the bioware problem: it is based on template-guided recombination. This recombination is very different from the standard homologous recombination of DNA strands, but it has the properties that are necessary to ensure the DNA acrobatics required for the implementation of the gene assembly process. If time permits, we will also present an outline of a mathematical theory based on template-guided recombination. Surprisingly enough, this theory turns out to be quite fundamental for theoretical computer science. The lecture is of a tutorial character and self-contained.
In particular, no knowledge of basic molecular biology is required.
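The formal model of gene assembly studied by Rozenberg and his co-workers includes an intramolecular operation, ld ("loop, direct-repeat excision"), that can be caricatured as a simple string operation; the sequence and pointer below are invented purely for illustration.

```python
def ld(string, pointer):
    """Toy "loop, direct-repeat excision": a sequence of the form u p v p w,
    with the same pointer p occurring twice, folds into a loop and excises
    the segment between the two occurrences as a circular molecule,
    leaving the linear molecule u p w."""
    i = string.find(pointer)
    j = string.find(pointer, i + len(pointer))
    assert i != -1 and j != -1, "pointer must occur at least twice"
    u = string[:i]
    v = string[i + len(pointer):j]
    w = string[j + len(pointer):]
    return u + pointer + w, v + pointer   # (linear remainder, excised circle)

linear, circular = ld("AAGxCCTTxGGA", "x")   # pointer "x" occurs twice
# linear == "AAGxGGA", circular == "CCTTx"
```

The real operations act on double-stranded DNA with orientation and inverted repeats; this sketch only captures the direct-repeat case on plain strings.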

12h00 - 12h30: Rémi Gribonval,
Chargé de recherche at INRIA/IRISA, Rennes, France.  
Can wavelets help computers listen and focus their attention? An introduction to source separation with sparse decompositions
In the last decade, wavelets have led to efficient image compression algorithms, now standardized in JPEG 2000. In this talk we will discuss how wavelets and their generalizations, so-called sparse atomic decompositions, can help solve challenging problems in "computer audition": the spatial localization of several musical instruments from a stereophonic recording, and the separate extraction of the sound of each instrument for focused listening.
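The key idea can be caricatured without an actual wavelet transform: if the sources are sparse in some domain and rarely active in the same coefficient, the left/right amplitude ratio of each coefficient reveals which source produced it, and masking then separates the sources. The mixing gains, sparsity level and data below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# two sparse "sources": most coefficients are zero, activity rarely overlaps
n = 1000
s1 = np.where(rng.random(n) < 0.05, rng.normal(size=n), 0.0)
s2 = np.where(rng.random(n) < 0.05, rng.normal(size=n), 0.0)

# stereo mixture: each source reaches the two channels with different gains
a1, a2 = (1.0, 0.3), (0.4, 1.0)          # (left gain, right gain) per source
left = a1[0] * s1 + a2[0] * s2
right = a1[1] * s1 + a2[1] * s2

# separation by masking: assign each coefficient to the source whose
# direction (gain ratio) best matches the observed left/right ratio
active = (np.abs(left) + np.abs(right)) > 0
closer_to_1 = (np.abs(right * a1[0] - left * a1[1])
               < np.abs(right * a2[0] - left * a2[1]))
s1_hat = np.where(active & closer_to_1, left / a1[0], 0.0)
s2_hat = np.where(active & ~closer_to_1, left / a2[0], 0.0)
```

Coefficients where only one source is active are recovered exactly; errors are confined to the rare coefficients where both sources overlap, which is where sparse transforms such as wavelets earn their keep.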

- Lunch -
14h00 - 15h00: Steve Marcus,
Professor, Department of ECE, University of Maryland at College Park, USA.  
A Model Reference Adaptive Search Algorithm for Global Optimization
Global optimization problems arise in a wide range of applications, from combinatorial optimization problems such as the traveling salesman problem, to continuous and stochastic problems that arise in manufacturing and telecommunications, such as inventory control and buffer allocation. We present a randomized algorithm called Model Reference Adaptive Search (MRAS) for solving both continuous and combinatorial global optimization problems. At each iteration, the algorithm generates a group of candidate solutions according to a parameterized probabilistic model. These candidate solutions are then used to update the parameters associated with the probabilistic model in such a way that the future search will be biased toward the region containing high-quality solutions. The parameter updating procedure in MRAS is guided by a sequence of implicit reference models that will eventually converge to a model producing only the optimal solutions. We establish global convergence of MRAS in both continuous and combinatorial domains. Numerical studies are also presented to demonstrate the effectiveness of the algorithm.
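A minimal sketch in the spirit of this family of methods, using a Gaussian sampling model whose parameters are refitted to the best candidates at each iteration (the actual MRAS reference-model updates are more elaborate; the objective function and all parameters here are invented):

```python
import numpy as np

def model_based_search(objective, dim, iters=100, pop=200, elite_frac=0.1, seed=0):
    """Minimise `objective` by repeatedly sampling candidates from a
    parameterized (Gaussian) model and refitting the model to the best
    ("elite") candidates, biasing future search toward good regions."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))     # sample candidates
        scores = np.apply_along_axis(objective, 1, x)  # evaluate them
        elite = x[np.argsort(scores)[:n_elite]]        # keep the best
        mu = elite.mean(axis=0)                        # refit the model
        sigma = elite.std(axis=0) + 1e-6               # (shrinks over time)
    return mu

# toy test: shifted sphere with minimum at (1, -2)
xmin = model_based_search(lambda v: np.sum((v - np.array([1.0, -2.0]))**2), dim=2)
```

This quantile-based update is essentially the cross-entropy method, a close relative of MRAS; the distinguishing feature of MRAS, the implicit sequence of reference models, is not reproduced here.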

15h00 - 16h00: Moshe Vardi,
Karen Ostrum George Professor in Computational Engineering, Rice University, Houston, USA.
A Call to Regularity
In three classical papers published between 1979 and 1982, Harel, together with Chandra, laid the foundations for the theory of relational queries. In particular, these papers highlighted the centrality of fixpoint queries defined by means of function-free Horn clauses, which became known as Datalog queries. The study of Datalog was a major theme in database-theory research from the mid 1980s to the mid 1990s. Unfortunately, most decision problems involving Datalog turned out to be undecidable. Furthermore, well-behaved fragments of Datalog turned out to be rather restrictive and unnatural. Surprisingly, a natural and quite general fragment of Datalog did emerge in the mid 1990s, in the context of semistructured data. This fragment is the class of regular queries, whose basic element is the regular path query. In this talk I will describe the class of regular queries and its well-behavedness.
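A regular path query asks for the nodes reachable from a source along a path whose edge labels spell a word in a regular language; it can be evaluated by a breadth-first search over the product of the graph and an automaton for the language. The toy graph, labels and DFA below are invented for illustration.

```python
from collections import deque

def eval_rpq(edges, dfa, start_state, accept, source):
    """Return all nodes reachable from `source` along a path whose
    label word is accepted by the DFA.
    edges: node -> list of (label, node); dfa: (state, label) -> state."""
    seen = {(source, start_state)}
    frontier = deque(seen)
    answers = set()
    while frontier:
        node, state = frontier.popleft()
        if state in accept:
            answers.add(node)          # an accepted word reaches this node
        for label, nxt in edges.get(node, []):
            s2 = dfa.get((state, label))
            if s2 is not None and (nxt, s2) not in seen:
                seen.add((nxt, s2))    # explore the product graph
                frontier.append((nxt, s2))
    return answers

# query "a* b" over a tiny labelled graph
edges = {0: [("a", 1)], 1: [("a", 1), ("b", 2)], 2: []}
dfa = {(0, "a"): 0, (0, "b"): 1}       # DFA for a*b; state 1 is accepting
result = eval_rpq(edges, dfa, start_state=0, accept={1}, source=0)
```

The product construction runs in time proportional to the graph size times the automaton size, one reason this fragment is considered well-behaved.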

16h30 - 17h00: André Seznec,
Directeur de recherche at INRIA/IRISA, Rennes, France.  
Thread-level parallelism: it may be time now!
For the last 50 years, computer architects have been able to exploit the advances of integration technology to design more and more powerful uniprocessors. Memory hierarchy, pipelined architectures, instruction-level parallelism and speculative execution are the techniques that have allowed the uniprocessor industry to double processor performance every two years (the famous Moore's law). For a long time, due to longer design cycles and the lack of efficient software, multiprocessors were not very effective. However, for the last few years, increasing processor complexity (e.g., using deeper pipelines or more instruction-level parallelism) has yielded poor performance returns. Therefore, processor manufacturers have begun to implement thread-level parallelism on a single chip, often mixing simultaneous multithreading and multiple cores. In this presentation, we recall the concepts that have been driving uniprocessor architecture advances for the past 30 years. Then we describe the thread-parallelism implementations that are appearing on-chip. Finally, as parallel software is frequently the missing piece, we suggest some research directions to leverage parallel hardware on sequential software.

17h00 - 17h30: Christine Guillemot,
Directeur de recherche at INRIA/IRISA, Rennes, France.  
Distributed source coding: a new paradigm for wireless video?
It is well known that the minimum achievable rate for lossless compression of two statistically dependent memoryless sources is given by the joint entropy of the two sources. To approach this rate bound, practical systems, e.g., video compression systems, encode and decode the two sources jointly. However, Slepian and Wolf established in the 1970s that this lossless compression rate bound can also be approached, with a vanishing error probability for long sequences, when encoding the two sources separately and decoding them jointly. This theorem has led to a new coding paradigm known as distributed source coding. The lossy equivalent of the Slepian-Wolf theorem was formulated later by Wyner and Ziv. Although these information-theoretic results have been known for the past 30 years, it is only recently that practical solutions have been explored for applications ranging from video compression and resilient video transmission to the minimization of transmit energy in sensor networks. In this talk, we will focus on the problem of distributed compression, highlighting its potential for video compression and wireless uplink transmission. The talk will first revisit the main principles of coding/decoding with side information and of distributed source coding, and will present some constructive codes to approach the Slepian-Wolf and Wyner-Ziv bounds. The talk will then illustrate the application of the distributed coding paradigm to video compression, leading to a shift of complexity from encoder to decoder, desirable for a class of transmission scenarios in wireless video and surveillance systems, as well as to some inherent resilience to channel impairments.
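Coding with side information via syndromes can be illustrated with a toy linear code: the encoder transmits only the syndrome of the source block, and the decoder combines it with correlated side information, here assumed to differ from the source in at most one bit. The [3,1] repetition code below is chosen purely for illustration.

```python
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])        # parity-check matrix of the [3,1] repetition code

def syndrome(x):
    return tuple(H @ x % 2)

# map each correctable (weight <= 1) error pattern to its syndrome
ERRORS = {syndrome(e): e for e in
          (np.array(p) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])}

def sw_encode(x):
    """Encoder sends only the 2-bit syndrome of the 3-bit source block."""
    return syndrome(x)

def sw_decode(s, y):
    """Decoder recovers x from the syndrome and the side information y,
    assumed to differ from x in at most one position."""
    e = ERRORS[tuple((np.array(s) + H @ y) % 2)]   # syndrome of x XOR y
    return (y + e) % 2

x = np.array([1, 0, 1])
y = np.array([1, 1, 1])          # side information: one bit flipped
x_hat = sw_decode(sw_encode(x), y)
```

Only 2 bits are transmitted instead of 3; practical systems replace this toy code with powerful channel codes (turbo or LDPC) to approach the Slepian-Wolf bound.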

Tuesday, 11 October 2005 - HONORIS CAUSA DAY - amphithéâtre Louis Antoine - campus Beaulieu - Rennes
9h30 - 10h45: Alan Willsky,
Edwin S. Webster Professor of Electrical Engineering, MIT, Cambridge, USA.

Graphical Models, Distributed Fusion, and Sensor Networks
In this talk we provide a picture of one group's journey through a set of related research topics and lines of inquiry. The precursors to this journey begin with some early work on Markov random fields (MRFs) and image processing and, more particularly, with the search for models and model classes whose structure yields both insight and computationally powerful algorithms. The real point of departure for this talk, however, begins with the author's joint work (beginning in 1988) with Albert Benveniste, Michèle Basseville, and Ramine Nikoukhah of IRISA and INRIA on multiresolution models defined on trees. We provide a brief overview of the nature of the results from that collaboration and from its continuation after the author's return from his 1988 sabbatical in Rennes (including some work with Khalid Daoudi of INRIA that finally realized part of the original research agenda posed at the start of 1988). We'll also briefly talk about the many applications these results have found, and then will turn to work we have pursued, fueled by the limitations of models defined on trees rather than on more general graphs. Markov models defined on graphs with loops form a very rich and active field, finding applications in a surprisingly wide array of disciplines, and challenging theoreticians and algorithm developers to devise methods that are both computationally tractable and high-performance. We provide a picture of some of our contributions in this area, all of which build (in one way or another) on our work on models defined on trees but also make explicit contact with the rich class of so-called "message-passing" algorithms (such as the celebrated "Belief Propagation" (BP) algorithm) for graphical models.
Among the contributions we will mention are so-called "embedded tree" algorithms for the efficient and exact solution of a nontrivial class of Gaussian MRFs; "tree-reparameterization" algorithms, which lead to a number of theoretical results on graphical models; a new recursive cavity modeling (RCM) algorithm that blends tree-based estimation with ideas from information geometry to allow scalable solution of very large estimation problems; the concept of "walk-sums" for graphical models and the new theoretical results they admit for belief propagation; and an approach we call "Nonparametric Belief Propagation" that involves a nontrivial extension of the idea of particle filtering to message-passing algorithms. We also describe our growing investigation of distributed fusion algorithms for sensor networks, in which there is a natural graph associated with network connectivity, as well as possibly two other graphs: one relating the variables that are sensed to those that are to be estimated, and a second relating the sources of information to the desired "sinks" (i.e., to nodes with responsibility for certain actions). We are still early in this investigation, but we describe several results, including some on what we call "message-censoring," in which a sensor decides not to send a BP message; empirical studies of censoring motivated a theoretical investigation into the propagation of messaging errors in BP, a study that has also produced the as-yet tightest results for BP convergence. We also describe our results on efficient communication of messages and the trade-off between communication load and performance, and on sensor resource management in which we take into account not just the power cost of taking a measurement and communicating a message but also that of dynamically "handing off" responsibility for estimation from one node to another. Further, in some initial work on the rapprochement of message-passing algorithms and decentralized detection, we describe the fact that an important component of sensor network activity is "self-organization" and show, for a simple scenario, how a team-decision problem can (a) be solved via a message-passing algorithm and (b) lead to what can be thought of as a network protocol coupling the physical and application layers.
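The "message-passing" algorithms referred to above can be illustrated by a minimal sum-product (belief propagation) sketch on a chain, the simplest tree, where the computed beliefs are exact marginals; the potentials below are invented for illustration.

```python
import numpy as np

def bp_chain_marginals(unary, pairwise):
    """Sum-product belief propagation on a chain of binary variables.
    unary: list of length-2 node potentials; pairwise: list of 2x2 edge
    potentials, pairwise[i] coupling node i and node i+1."""
    n = len(unary)
    fwd = [np.ones(2)]                     # message into node i from the left
    for i in range(1, n):
        fwd.append(pairwise[i - 1].T @ (unary[i - 1] * fwd[i - 1]))
    bwd = [np.ones(2)] * n                 # message into node i from the right
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise[i] @ (unary[i + 1] * bwd[i + 1])
    beliefs = [unary[i] * fwd[i] * bwd[i] for i in range(n)]
    return [b / b.sum() for b in beliefs]  # normalized exact marginals

unary = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pairwise = [np.array([[0.9, 0.1], [0.1, 0.9]])] * 2
marginals = bp_chain_marginals(unary, pairwise)
```

On graphs with loops the same message updates no longer give exact marginals, which is precisely where the theoretical questions discussed in the talk (convergence, error propagation, walk-sums) arise.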

11h00 - 12h15: David Harel,
The William Sussman Professorial Chair, Department of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel.
Computers are Not Omnipotent
In 1984, TIME magazine quoted the chief editor of a certain software publication as saying: "Put the right kind of software into a computer, and it will do whatever you want it to. There may be limits on what you can do with the machines themselves, but there are no limits on what you can do with software." This talk will survey results obtained over the last 75 years by mathematicians, logicians and computer scientists, which disprove this ignorance-based statement in a sweeping and fundamental way. We shall discuss problems that are provably noncomputable, as well as ones that are hopelessly time- or memory-consuming (requiring far more time than has elapsed since the Big Bang, or requiring a computer that would not fit into the entire known universe). We will also take a somewhat more amusing look at these facts, and relate them to the (im)possibilities of true artificial intelligence.
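A back-of-the-envelope calculation, with invented but generous numbers, supports the "far more time than has elapsed since the Big Bang" claim: even exhaustively testing every truth assignment of a 200-variable Boolean formula at a trillion assignments per second takes astronomically many lifetimes of the universe.

```python
# brute force over all truth assignments of a 200-variable formula
assignments = 2 ** 200               # about 1.6e60 assignments
rate = 10 ** 12                      # optimistic: 10^12 assignments per second
seconds_needed = assignments // rate
age_of_universe_s = int(4.3e17)      # ~13.7 billion years, in seconds
ratio = seconds_needed // age_of_universe_s   # universe-lifetimes required
```

Here `ratio` comes out above 10^30, i.e., more than a million trillion trillion ages of the universe, for a formula small enough to print on one page.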

- Lunch -
14h00 - 15h30: round table with Alan Willsky and David Harel
16h00: press briefing
16h30: honoris causa ceremony at the Présidence of the Université de Rennes 1