API / COSI: Codesign of Silicon Systems

Project Head: François CHAROT


The Inria-CNRS project COSI (at Irisa, a "project" is a group of people working jointly on a common theme) evolved from the project API. It consists of 4 full-time research/faculty members and their PhD students (currently 4), 1 engineer, and a small number of short-term visiting researchers. We work on methods and tools for implementing complete systems on silicon.

The challenges of embedded system design are (i) an evolving palette of target technologies (full-custom VLSI, FPGAs, reconfigurable coprocessors, and hybrid hardware/software, often with the software running on special-purpose instruction set processors (ASIPs) or on RISC cores), (ii) the importance of fast design (time-to-market), and (iii) the increasing complexity of the systems being designed.

To address these challenges, our main thesis is that design tools and methods must be based on executable specifications that can be systematically transformed, with the ability to validate each transformation and to retrace the design path. These transformations are based on formal methods wherever possible (though not always), and must allow the designer to integrate with other tools that are not formally based.

Our work emphasizes three main themes as described below: very high level synthesis of dedicated hardware and software using formal transformations, compilation and optimization for ASIPs, and tools and design methods for reconfigurable computing. We focus principally but not exclusively on regular computations that can benefit from parallelism.

In parallel, we work on specific applications drawn from signal and image processing, telecommunications, high-performance computing, biological sequence comparison, etc.

Very High Level Synthesis with Alpha

Our work is based on formal methods for transforming high level specifications expressed as recurrence equations. It originated in research on systematic design of systolic arrays which led to the development of what is called the polyhedral model. The class of programs addressed by this methodology has evolved to encompass static control loops and the target architectures may range from dedicated VLSI arrays to general purpose parallel machines (the methods now have close ties to automatic parallelization of such loops). In the context of the polyhedral model we are also interested in declarative programming, verification, and static analysis.
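To illustrate the class of computations covered by the polyhedral model, the sketch below (in C, not in Alpha; the names and sizes are illustrative only) writes a FIR filter as a static control loop nest. The recurrence it implements has uniform (constant-distance) dependences, the kind of specification that systolic-array synthesis methods in this tradition can map to hardware.

```c
#include <assert.h>

#define N 8   /* number of outputs      */
#define K 3   /* number of filter taps  */

/* FIR filter as a static control loop nest: the recurrence
   y[i] = sum over k of w[k] * x[i + k] has uniform dependences
   and a polyhedral (rectangular) iteration domain, so it falls
   in the class of programs handled by polyhedral methods.
   (Illustrative sketch only, not MMAlpha output.)             */
void fir(const int *x, const int *w, int *y) {
    for (int i = 0; i < N; i++) {
        int acc = 0;
        for (int k = 0; k < K; k++)
            acc += w[k] * x[i + k];
        y[i] = acc;
    }
}
```

Every loop bound and array index here is an affine function of the surrounding loop counters, which is precisely the "static control" property the text refers to.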

Systolic array design is achieved through formal, correctness-preserving transformations applied to programs in Alpha (a functional data parallel language developed in 1989 in the group), using the MMAlpha system (an environment for manipulating Alpha programs). MMAlpha is now available as an operational prototype for non-commercial research purposes. Soon, we plan to release it under the standard GNU license agreement.

Our current work is on extending the class of programs towards irregularity (of data dependences and of iteration spaces) and on partitioning (also called clustering, loop blocking, tiling).
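A minimal sketch of what partitioning means in practice, again in C rather than Alpha (the flat/tiled pair and all names are illustrative assumptions): the iteration space of a reduction is split into tiles of size B, a transformation that preserves the result while grouping iterations into blocks.

```c
#define N 16
#define B 4   /* tile (block) size; assumed here to divide N evenly */

/* A reduction written as a flat loop. */
int sum_flat(const int *a) {
    int s = 0;
    for (int i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* The same reduction after tiling (clustering / loop blocking):
   the 1-D index space [0, N) is partitioned into N/B tiles, with
   an outer loop over tiles and an inner loop within each tile.
   The result is unchanged; the new loop structure exposes
   tile-level locality and parallelism.                          */
int sum_tiled(const int *a) {
    int s = 0;
    for (int t = 0; t < N / B; t++)      /* loop over tiles      */
        for (int j = 0; j < B; j++)      /* loop within one tile */
            s += a[t * B + j];
    return s;
}
```

The same idea generalizes to multi-dimensional loop nests, where each tile becomes a small polyhedron of the original iteration space.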

Retargetable Compilation for ASIPs

Significant parts of dedicated systems are now implemented on programmable processors, typically RISC or DSP cores that are optimized and specialized for the particular application. In addition, many control functions are often implemented on other specialized processors. Compilation for such ASIPs is an important challenge, as is the design of the processors themselves, since one seeks to determine both the architecture and the compiler for a specific set of programs under performance constraints. Our work on this subject builds on our previous experience with MOVIE, a programmable SIMD processor for use in video applications, and is based on an open compilation framework and a special formalism to describe the ASIP architecture.

Reconfigurable Computing

The third axis of our research deals with reconfigurable coprocessors, a promising technology that occupies an intermediate niche between custom ASICs, which offer high performance but little flexibility, and ASIPs, which offer flexibility (programmability) at lower performance. The challenge is to develop tools that combine flexibility with performance. Our main hypothesis is that, since the raw performance of the technology is poorer than that of ASICs, we must exploit parallelism as well as the regularity of FPGAs. We therefore first map loops to regular arrays, and then seek to develop technology-independent back end optimizers for such arrays.

Applications / Algorithms

biological sequence comparison
image compression and the MOVIE processor
Knapsack problem
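The knapsack problem listed above is a representative example of why these applications suit the group's methods: its dynamic programming formulation is a recurrence over a two-dimensional (item, capacity) index space with affine dependences. A hypothetical C sketch of that recurrence (the sizes and names are illustrative, not taken from the group's code):

```c
#define NITEMS 4
#define CAP 8

/* 0/1 knapsack as a recurrence over a 2-D rectangular domain:
   best[i][c] = max(best[i-1][c], best[i-1][c - wt[i-1]] + val[i-1]).
   Both dependences are affine in (i, c), so the computation fits
   the polyhedral model discussed earlier. (Illustrative sketch.)  */
int knapsack(const int *wt, const int *val) {
    int best[NITEMS + 1][CAP + 1] = {0};
    for (int i = 1; i <= NITEMS; i++)
        for (int c = 0; c <= CAP; c++) {
            best[i][c] = best[i - 1][c];          /* skip item i */
            if (wt[i - 1] <= c) {                 /* take item i */
                int take = best[i - 1][c - wt[i - 1]] + val[i - 1];
                if (take > best[i][c])
                    best[i][c] = take;
            }
        }
    return best[NITEMS][CAP];
}
```

Biological sequence comparison has the same structure: an alignment score table filled by a 2-D recurrence with short-distance dependences.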



Last modification of this page: November 2001.
