Multicore chips bring more power to bear on a problem while lowering power consumption, but only if programmers can take advantage of their architecture. Parallelizing compilers for vector-processing single-instruction, multiple-data (SIMD) architectures are well understood. Symmetric multiprocessing (SMP) processor arrays, however, are another matter.
CodePlay's auto-parallelizing VectorC compiler makes this job manageable. Its approach differs from explicit parallel programming environments such as OpenMP: programmers still must identify the areas of an application where parallelism will pay off, but the required annotation is relatively simple.
For example, sieve blocks identify the range where parallelism can be exploited (see the code). Special data types such as IntSum identify variables that will be replicated across multiple processors in a controlled fashion. Similar definitions are used to aggregate results. The idea is to minimize the job of the programmer even when dealing with legacy code.
The compiler checks dependencies and generates code for a range of targets, including SIMD and single-core processors. A key differentiator from other approaches is that the generated code can be debugged on a single-core platform, and no changes to the source code are required to retarget a multicore platform. Debugging on a single core is significantly easier.
CodePlay
www.codeplay.com