ERCIM News No. 55, October 2003
R&D AND TECHNOLOGY TRANSFER


CAPS Entreprise: New Technologies for Embedded Code Tuning

by Ronan Amicel and François Bodin


Designing an embedded system means searching for a trade-off between hardware and software. The fast pace of evolution in hardware technology, combined with the increasing size of software, drives the need for new software tools. These tools must speed up the development of new systems while decreasing costs and increasing software portability and safety.

While microcontrollers and small DSP chips are still heavily used, more and more devices now integrate substantial processing power. This increased computational power comes at a price: more hardware complexity, in the form of instruction-level parallelism and cache memories.

Instruction-level parallelism is used at many levels. First, the processor's functional units are pipelined and instruction sets are extended with media instructions (eg Motorola's AltiVec for PowerPC) that rely on SIMD (Single Instruction Multiple Data) computations. To achieve still greater performance, VLIW (Very Long Instruction Word) processors are used (such as the Philips TriMedia, Equator MAP-CA or Texas Instruments C6x), which can issue several instructions at each clock cycle. Contrary to the usual behavior of DSP code, faster code usually means increased code size. Furthermore, to reduce the gap between CPU and memory speeds, cache memories are now used in embedded systems, introducing unpredictability in a program's performance which may be very difficult to handle.
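As a concrete illustration of this cache effect (a generic sketch, not taken from any particular CAPS tool), the two C functions below compute the same sum over a two-dimensional array. The first walks memory sequentially while the second walks it with a large stride, so on a cached processor they can show very different execution times even though they are semantically identical; the array dimension is an arbitrary example value.

#define N 512

double a[N][N];

/* Cache-friendly traversal: consecutive memory addresses. */
double sum_row_order(void)
{
    int i, j;
    double s = 0.0;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Same result, but the stride of N doubles causes many more cache misses. */
double sum_column_order(void)
{
    int i, j;
    double s = 0.0;
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            s += a[i][j];
    return s;
}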

As a direct consequence of this hardware complexity, application developers face increasingly complex issues in the code optimization process. Hardware-dependent code optimizations are crucial to fully exploit the target processors. Small changes in the source code can result in dramatic performance changes (either positive or negative). In this context, compilers have a major role to play. First, compilers for embedded systems must now integrate code optimization techniques previously used only for supercomputers (eg vectorization techniques to exploit media instructions). However, unlike in the high-performance computing context, embedded systems designers are looking for a trade-off between code size and performance. This constraint alone forbids a simple technology transfer from the high-performance domain to the embedded systems world. Tools and compilers must be designed to give the programmer better control over code quality and memory usage. Furthermore, they need to bridge the gap between compiler-generated code and assembly code specially crafted by an expert.
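The code-size versus performance tension shows up even in a trivial loop. In the hypothetical sketch below, the second version is unrolled by four: on a pipelined processor it typically runs faster because it executes fewer branches and exposes more instruction-level parallelism, but it occupies more instruction memory. An embedded compiler has to let the programmer decide, loop by loop, which side of this trade-off matters more (the assumption that n is a multiple of four only keeps the example short).

/* Compact version: smallest code, one addition per iteration. */
int sum(const int *x, int n)
{
    int i, s = 0;
    for (i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Unrolled by four: fewer branches and more parallelism, but larger code. */
int sum_unrolled(const int *x, int n)
{
    int i, s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (i = 0; i < n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    return s0 + s1 + s2 + s3;
}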

Over the past ten years, research in the CAPS project at IRISA (a joint research laboratory of INRIA, CNRS, INSA and the University of Rennes) has focused on helping programmers achieve the best performance on new processor architectures (see http://www.irisa.fr/caps/). To this end, the group developed innovative frameworks for building automatic or interactive optimization tools and fast instruction-set simulators. Furthermore, the group studied advanced compilation techniques able to handle the code-size versus execution-speed trade-off at the application level.

A first environment worth mentioning is an interactive code optimization tool that relies on artificial intelligence techniques (such as case-based reasoning) to suggest appropriate transformations to the programmer. The suggestions are based on a fine-grained analysis of the structure of program fragments and on their similarity with situations stored in a knowledge base.

Another technology helps developers leverage the custom instruction-set extensions, such as MMX or other media-oriented extensions, available on their target platform. By combining loop vectorization with a configurable pattern-recognition engine, this tool can automatically transform C programs to take advantage of media-processing instructions.
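To give an idea of the kind of rewrite such a tool performs (the code below is only an illustrative sketch using AltiVec intrinsics, not output from the CAPS tool), a scalar loop adding two float arrays can be turned into a loop that processes four elements per instruction. The sketch assumes that n is a multiple of four and that the arrays are 16-byte aligned.

#include <altivec.h>

/* Scalar version: one addition per iteration. */
void vadd_scalar(float *a, const float *b, const float *c, int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}

/* Vectorized version: four additions per AltiVec instruction. */
void vadd_altivec(float *a, const float *b, const float *c, int n)
{
    int i;
    for (i = 0; i < n; i += 4) {
        vector float vb = vec_ld(0, &b[i]);   /* load four floats */
        vector float vc = vec_ld(0, &c[i]);
        vec_st(vec_add(vb, vc), 0, &a[i]);    /* store four sums */
    }
}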

The CAPS group also developed a fast and flexible compiled instruction-set simulation system that significantly reduces the time needed to generate a simulator, thereby addressing the major limitation of this technology.
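The principle of compiled simulation (shown here as a minimal, hypothetical sketch rather than the actual CAPS system) is to translate the target program into host code once, at simulator-generation time, so that no instruction fetch and decode work remains at run time; producing and compiling this code is the main cost, which is why generation time matters. The register-file layout and the three-instruction target sequence below are invented for illustration.

/* Simulated processor state for a hypothetical 32-register target. */
typedef struct {
    int r[32];              /* general-purpose registers */
    unsigned long cycles;   /* simple cycle count */
} CpuState;

/* Host code that a compiled simulator could generate for the target block:
       addi r1, r0, 10
       addi r2, r0, 32
       add  r3, r1, r2
   Each target instruction becomes straight-line C, with no run-time decoding. */
void run_block_0(CpuState *s)
{
    s->r[1] = s->r[0] + 10;       s->cycles++;
    s->r[2] = s->r[0] + 32;       s->cycles++;
    s->r[3] = s->r[1] + s->r[2];  s->cycles++;
}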

CAPS Entreprise was founded by members of the CAPS research group to bring innovative software tools, solutions and services to the market for high-performance embedded systems. The company aims to become a reliable partner for system builders, platform designers and developers seeking the best system performance, by helping them match their software to the specifics of the underlying hardware platform.

CAPS Entreprise offers standalone tools that are specialized for a given task (code transformation, simulation, worst-case execution time analysis, etc). These tools can act as building blocks in a software tool chain and are designed for seamless integration into common development environments. The company also provides complete compilation solutions tailored to the customer's needs: after a detailed study of the requirements and of the current development process, specific additions and enhancements to the existing code generation infrastructure are proposed and implemented. Finally, CAPS Entreprise offers custom consulting services, such as performance analyses or instruction-set evaluations. Through these services, customers benefit from the company's in-house expertise and tools, helping them make strategic decisions on complex technology issues.

Links:
http://www.caps-entreprise.com/

Please contact:
contact@caps-entreprise.com

 
