Do µP Compilers Measure Up?

Feb. 3, 2003
Many DSP, 8-, and 16-bit microcontroller developers continue to cling to assembler while their colleagues dealing with 32- and 64-bit processors work exclusively with high-level languages like C, C++, and Java. This is often due to the myth that compilers cannot generate code as compact or as fast as a seasoned assembler programmer can. However, compiler improvements and faster development platforms have made C, C++, and EC++ compilers the preferred tools even for experienced programmers. A growing number of evaluation kits now come bundled with C/C++ compilers.

The key to the growth in microcontroller compilers is high-performance PCs used by developers. A 3-GHz processor can crank through code in a flash, even with every optimization feature enabled. As a result, compiler writers can add increasingly sophisticated optimizations to the point where compilers perform better than seasoned assembler professionals. Meanwhile, they still maintain the goals of high-level tools, such as higher productivity and improved maintenance.

SUPER OPTIMIZATION
Peter Dibble, distinguished engineer at TimeSys, points to Henry Massalin's 1987 paper "Superoptimizer: A Look at the Smallest Program," presented at the conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). "A superoptimizer looks for clever ways to solve small problems by searching every possible instruction sequence for the one that generates that result most quickly. It has generated some surprising instruction sequences," notes Dibble. "It gets a lot of mileage out of knowing the entire instruction set. You'll find things like BCD instructions showing up in the middle of a bit operation." A superoptimizer is now distributed as a utility alongside gcc.
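The exhaustive search Dibble describes can be sketched in miniature. The toy "machine" below (one working register, one scratch register, and a five-op instruction set) is invented for this illustration and is not Massalin's tool; it brute-forces every short instruction sequence until it finds one that computes abs(x) on a set of probe inputs, rediscovering the classic branchless idiom t = x >> 31; r = x ^ t; r = r - t.

```c
#include <assert.h>
#include <stdint.h>

/* A toy superoptimizer in the spirit of Massalin's exhaustive search.
 * The machine model and op set are hypothetical, invented for this
 * sketch.  Note: >> on a negative value assumes arithmetic shift,
 * which is what mainstream compilers implement. */

enum { OP_NEG, OP_NOT, OP_SAR31, OP_XOR_T, OP_SUB_T, N_OPS };

/* Execute a sequence: r starts as the input x; t is a scratch
 * register written only by OP_SAR31. */
static int32_t run(const int *seq, int len, int32_t x) {
    int32_t r = x, t = 0;
    for (int i = 0; i < len; i++) {
        switch (seq[i]) {
        case OP_NEG:   r = -r;      break;
        case OP_NOT:   r = ~r;      break;
        case OP_SAR31: t = r >> 31; break;  /* sign mask into t */
        case OP_XOR_T: r ^= t;      break;
        case OP_SUB_T: r -= t;      break;
        }
    }
    return r;
}

static const int32_t probes[] = { 0, 1, -1, 5, -5, 7, -7,
                                  INT32_MAX, INT32_MIN + 1 };
static const int nprobes = sizeof probes / sizeof probes[0];

static int computes_abs(const int *seq, int len) {
    for (int i = 0; i < nprobes; i++) {
        int32_t x = probes[i];
        if (run(seq, len, x) != (x < 0 ? -x : x))
            return 0;
    }
    return 1;
}

/* Try every sequence of length 1, 2, ..., maxlen; copy the first
 * (hence shortest) one that passes all probes into best and return
 * its length, or 0 if none is found. */
static int search(int *best, int maxlen) {
    int seq[8];
    for (int len = 1; len <= maxlen && len <= 8; len++) {
        long total = 1;
        for (int i = 0; i < len; i++) total *= N_OPS;
        for (long code = 0; code < total; code++) {
            long c = code;
            for (int i = 0; i < len; i++) { seq[i] = c % N_OPS; c /= N_OPS; }
            if (computes_abs(seq, len)) {
                for (int i = 0; i < len; i++) best[i] = seq[i];
                return len;
            }
        }
    }
    return 0;
}
```

Real superoptimizers search an actual instruction set rather than a toy one, which is exactly why they surface sequences (like BCD instructions inside bit manipulation) that no human would think to try.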

Assembler wizards are known for this type of technique. They take advantage of idioms to generate compact but obscure code. The unfortunate result is code that's hard to maintain. Few wizards know all such idioms for a particular architecture, whereas the C code that shows the original algorithm is easy for even average users to understand.

David Hoff, product marketing engineer in ARC International's MetaWare tools group, highlights the importance of having a compiler team that has worked together for a long time, because compilers are rarely the product of an individual programmer.

Applying all optimizations doesn't necessarily generate an optimal result, says Gerard Vink, program manager of platform technology at Altium. "At the outset it is important to note that not applying an optimization to a particular code fragment may improve final code quality," he explains. "For example, common subexpression elimination (CSE) is a very effective optimization implemented in almost every compiler front end. A well-implemented CSE algorithm always improves the intermediate code because redundant operations are removed. But since the value of the common subexpression has to be stored somewhere (in a register), CSE may increase register pressure, causing register-spill code that increases size and decreases execution speed. In such cases, the negative effects of register spilling outweigh the positive effect of the eliminated subexpression."
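Vink's register-pressure point is easiest to see at the source level. The hypothetical fragment below (function names are illustrative) shows the transformation a CSE pass performs:

```c
#include <assert.h>

/* Before CSE: the expression (a + b) is evaluated twice. */
int sum_before_cse(int a, int b, int c) {
    int x = (a + b) * c;
    int y = (a + b) - c;
    return x + y;
}

/* After CSE: a + b is computed once and held in a temporary.  One
 * addition is saved, but the temporary now stays live in a register
 * across both uses.  On a register-poor microcontroller that extra
 * live value can force a spill to memory -- Vink's point: the
 * "optimized" form is sometimes larger and slower. */
int sum_after_cse(int a, int b, int c) {
    int t = a + b;        /* common subexpression in a register */
    int x = t * c;
    int y = t - c;
    return x + y;
}
```

Both functions compute the same result; the trade-off is purely in how many values must be kept live at once, which is why a good back end weighs CSE against the target's register budget.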

Microcontroller architectures force compilers to do some odd things. Dave Hudson, principal software engineer at Ubicom Ltd., notes that the company's C compiler for the IP2000 often needs to do the exact opposite of what many compilers attempt: it avoids using registers, because memory references on that architecture carry very little overhead.

DSPs AND VECTORIZATION
According to Arun Mulpur, DSP product marketing manager at The MathWorks, one of the biggest challenges engineers encounter when implementing fixed-point DSP applications in C is managing the dynamic range of fixed-point variables and mathematical operations, a task that can consume up to 40% of a system designer's time. Features like bit-true simulation let developers address this problem while still working with high-level models.
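The dynamic-range bookkeeping Mulpur describes comes from the scaling built into every fixed-point operation. A minimal Q15 sketch follows; the q15_t name, Q15 macro, and q15_mul helper are illustrative, not a MathWorks or vendor API:

```c
#include <assert.h>
#include <stdint.h>

/* Q15 format: a 16-bit integer n represents the real value n / 2^15,
 * covering [-1.0, 1.0).  Multiplying two Q15 values yields a Q30
 * product; shifting right by 15 rescales it to Q15.  Deciding where
 * such shifts go -- and proving no intermediate overflows -- is the
 * dynamic-range analysis that eats so much design time. */
typedef int16_t q15_t;

/* Convert a real constant in [-1.0, 1.0) to Q15 (compile-time use). */
#define Q15(x) ((q15_t)((x) * 32768.0))

static q15_t q15_mul(q15_t a, q15_t b) {
    int32_t p = (int32_t)a * b;   /* exact Q30 intermediate, no overflow */
    return (q15_t)(p >> 15);      /* truncate back to Q15; assumes
                                     arithmetic shift for negatives */
}
```

Bit-true simulation lets a designer run exactly this arithmetic (including the truncation behavior) inside a high-level model, so scaling errors show up before the code reaches the target DSP.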

Reid Tatge, TI Fellow at Texas Instruments, points out the difficulty that compilers and assembly programmers face when dealing with DSP architectural features such as SIMD instructions and circular addressing. The hardware often restricts the alignment of the arrays being accessed; if the compiler can't prove the arrays' alignment, it can't use the addressing mode. A compiler must also restructure loops to find enough parallelism to fill all the instruction slots.
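Tatge's alignment constraint can be made concrete with C11's alignas. The dot-product example below is illustrative, not TI code: forcing the arrays onto an 8-byte boundary at their definition gives the compiler the proof it needs to use wide packed loads instead of element-at-a-time accesses.

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

#define N 8

/* Without alignas, these arrays need only 2-byte alignment, and a
 * compiler that cannot prove 8-byte alignment must fall back to
 * narrow loads (or emit a runtime check plus a scalar prologue).
 * Forcing the alignment at the definition removes the doubt. */
alignas(8) static int16_t a[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
alignas(8) static int16_t b[N] = { 8, 7, 6, 5, 4, 3, 2, 1 };

/* A loop shaped for SIMD: fixed trip count, no aliasing, aligned
 * data -- four int16 multiply-accumulates can fill one wide slot. */
int dot_q(void) {
    int acc = 0;
    for (int i = 0; i < N; i++)
        acc += a[i] * b[i];
    return acc;
}
```

The same information can also reach the compiler through target-specific pragmas or intrinsics; the common thread is that alignment must be provable at compile time, not merely true at run time.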
