Multicore My Way

Jan. 18, 2007
The trend today isn’t just putting multiple cores on one chip, but also connecting multicore chips to build large systems.

Multicore designs are all the rage, and the reasons why are easy to understand. Faster clock rates send power consumption and heat dissipation through the roof, most of the big single-core architectural improvements have already been implemented, and larger caches simply take up die space.

If performance, clock rate, and power all scaled linearly, the tradeoffs would be more interesting. But in practice, using more processor cores running at a lower clock rate provides more throughput while consuming less power. Sun Microsystems' UltraSparc T1 chip layout also highlights the issue of chip real estate (see the figure).

The move to 90- and 45-nm technologies significantly raises the number of cores that can fit on a chip. Likewise, the area allocated to things other than the processor core has been growing significantly. Caches are expanding, and memory controllers have moved on-chip for many architectures.

Multiple-core architectures are quite varied (see the table). Typically, an architecture is optimized for its main target environment. For example, Azul Systems' Vega 2 targets the Java enterprise market where Java applications run in a J2EE (Java 2 Enterprise Edition) environment (see "Lots Of Java").

Instruction sets don't make much of a difference among multicore chips, since Intel and AMD share a common 32- and 64-bit instruction-set architecture (ISA)—well, almost, but the overlap is well over 95%. AMD uses HyperTransport links to connect chips together (see "HyperTransport: The Ties That Bind"), while Intel uses a central memory-controller architecture (see "Memory Front And Center").

Benchmarks often highlight architectural differences, but keep in mind the phrase "liars, damn liars, and chip vendors." A more important issue when looking at multicore chips will be threading in application software.

Putting multiple cores on-chip will benefit all but the system that runs one major application with one thread. Mileage may vary, but the benefits of a large number of cores are likely to remain greater for servers until application developers adjust to the plethora of threads.
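The threading point above can be sketched in code. Here is a minimal, illustrative Java example (Java chosen since the article's enterprise examples are Java-centric; the class name and workload are hypothetical, not from the article): a summation split across one worker thread per core. A single-threaded version of the same application would leave all but one core idle, no matter how many the chip provides.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: divide a summation across worker threads so that
// each core can run one chunk in parallel. Only a threaded application
// like this one can take advantage of extra cores.
public class MulticoreSum {
    static long parallelSum(long n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long chunk = n / threads;
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            long lo = i * chunk + 1;
            long hi = (i == threads - 1) ? n : (i + 1) * chunk;
            parts.add(pool.submit(() -> {
                long s = 0;
                for (long k = lo; k <= hi; k++) s += k;  // this chunk's partial sum
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();   // combine partial sums
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // One thread per core the runtime reports.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(parallelSum(1_000_000L, cores)); // prints 500000500000
    }
}
```

The result is the same regardless of the thread count; what changes is how many cores are kept busy computing it, which is exactly the adjustment application developers must make before a "plethora of threads" pays off.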

It's interesting to note that only some multicore chips implement multithreading. The bottom line is that this class of chips should really be rated by the number of threads they can execute simultaneously. Also, parallel programming languages are a hot research topic—the hardware has finally caught up to their needs. But that's another story.
