Multicore My Way

Jan. 18, 2007
The trend today isn’t just putting multiple cores on one chip, but also connecting multicore chips to build large systems.

Multicore designs are all the rage, and the reasons why are easy to understand. Faster clock rates send power consumption and heat dissipation through the roof, the major single-core architectural improvements have already been made, and larger caches simply take up space.

If power scaled linearly with clock rate, the tradeoffs would be more interesting. In practice, though, more processor cores running at a lower clock rate deliver more throughput while consuming less power. Sun Microsystems' UltraSPARC T1 chip layout also highlights the issue of chip real estate (see the figure).
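To see why, recall the first-order approximation that a core's dynamic power scales with its switching capacitance, the square of its supply voltage, and its clock frequency (P ≈ C·V²·f). The short Java sketch below plugs purely hypothetical capacitance, voltage, and frequency figures into that relation (none of them come from this article or any particular chip) to show how several slower, lower-voltage cores can deliver more aggregate clock cycles than one fast core within a similar power budget.

```java
// Illustrative only: compares relative dynamic power for one fast core versus
// several slower cores using the first-order approximation P ~ C * V^2 * f.
// The capacitance, voltage, and frequency figures are hypothetical.
public class CorePowerSketch {
    // Relative dynamic power of a single core
    static double dynamicPower(double capacitance, double volts, double ghz) {
        return capacitance * volts * volts * ghz;
    }

    public static void main(String[] args) {
        double c = 1.0; // normalized switching capacitance per core

        // One core pushed to a high clock (hypothetical: 3.0 GHz at 1.4 V)
        double fastCore = dynamicPower(c, 1.4, 3.0);

        // Four cores at half the clock and a lower voltage (hypothetical: 1.5 GHz at 1.0 V)
        double slowCores = 4 * dynamicPower(c, 1.0, 1.5);

        System.out.printf("1 x 3.0 GHz core : relative power %.2f%n", fastCore);
        System.out.printf("4 x 1.5 GHz cores: relative power %.2f%n", slowCores);
        // With these made-up numbers, the four slower cores offer twice the
        // aggregate clock cycles for roughly the same relative power.
    }
}
```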

The move to 90- and 45-nm technologies significantly raises the number of cores that can fit on a chip. Likewise, the area allocated to things other than the processor core has been growing significantly. Caches are expanding, and memory controllers have moved on-chip for many architectures.

Multiple-core architectures are quite varied (see the table). Typically, an architecture is optimized for its main target environment. For example, Azul Systems' Vega 2 targets the Java enterprise market where Java applications run in a J2EE (Java 2 Enterprise Edition) environment (see "Lots Of Java").

Instruction sets don't make much of a difference among these multicore chips, since Intel and AMD share a common 32- and 64-bit instruction set architecture (ISA). Well, almost: the overlap is well over 95%. AMD uses HyperTransport links to connect chips together (see "HyperTransport: The Ties That Bind"), while Intel uses a central memory-controller architecture (see "Memory Front And Center").

Benchmarks often highlight architectural differences, but keep in mind the phrase "liars, damn liars, and chip vendors." A more important issue when looking at multicore chips will be threading in application software.

Putting multiple cores on-chip will benefit all but the system that runs one major application with one thread. Mileage may vary, but the benefits of a large number of cores are likely to remain greater for servers until application developers adjust to the plethora of threads.
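As a minimal sketch of why that is (this example is not from the article, and the work item and four-thread pool size are arbitrary stand-ins), the Java program below runs the same CPU-bound chore once on a single thread and once split across a fixed thread pool. Only the pooled version can keep more than one core busy; a single-threaded application leaves the extra cores idle.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch: the same workload run single-threaded and then split across
// a small fixed thread pool. The workload itself is just a stand-in.
public class ThreadedWorkSketch {
    // Hypothetical CPU-bound work item: sum of square roots over a range
    static double crunch(long start, long end) {
        double sum = 0;
        for (long i = start; i < end; i++) sum += Math.sqrt(i);
        return sum;
    }

    public static void main(String[] args) throws Exception {
        final long N = 50_000_000L;
        final int threads = 4; // arbitrary pool size standing in for the core count

        // Single-threaded: one core does all the work while the others idle
        double single = crunch(0, N);

        // Threaded: the same range split into independent chunks
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Double>> parts = new ArrayList<>();
        long chunk = N / threads;
        for (int t = 0; t < threads; t++) {
            final long lo = t * chunk;
            final long hi = (t == threads - 1) ? N : lo + chunk;
            Callable<Double> task = () -> crunch(lo, hi);
            parts.add(pool.submit(task));
        }
        double total = 0;
        for (Future<Double> part : parts) total += part.get();
        pool.shutdown();

        System.out.printf("single-thread sum = %.1f, pooled sum = %.1f%n", single, total);
    }
}
```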

It's interesting to note that only some multicore chips also implement multithreading within each core. The bottom line is that this class of chips should really be rated by the number of threads they can execute simultaneously. Parallel programming languages are also a hot research topic now that the hardware has finally caught up to their needs, but that's another story.
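For what it's worth, the thread count that matters to software is the number of hardware threads the operating system exposes, which on a chip with per-core multithreading is larger than the number of physical cores. In Java that figure is available directly, as this trivial sketch shows:

```java
// Reports how many hardware threads the JVM can see. On a chip with
// per-core multithreading this is the thread count, not the core count,
// which is the number that matters when sizing thread pools.
public class HardwareThreads {
    public static void main(String[] args) {
        int hwThreads = Runtime.getRuntime().availableProcessors();
        System.out.println("Hardware threads visible to the JVM: " + hwThreads);
    }
}
```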
